Cliff Stamp

Scientific testing is impartial, and most of all, repeatable. Taking the human out of the equation removes the greatest amount of bias. Best of all, you can point and say "Hey, this is exactly the amount of force it took to produce this result."

What does this say about the jungle testing done by Jeff Randall? Why are his subjective tests OK, while Cliff's aren't? Cliff spends a lot of real time using real knives chopping real wood. Why is this any less real (and meaningful) than what Jeff reports on from use in the jungles of Peru? The subjective parameters here are different - Cliff is one person testing many dozens of knives, while Jeff reports on the likes/dislikes of dozens of people testing a few knives - but each result is therefore valuable for different reasons.
 
The point some of you seem to be missing is that science & subjectiveness start meaning a great deal when a reviewer begins using quantitative measures for performance loss / gain. I refer to reviews such as this -
http://www.physics.mun.ca/~sstamp/knives/MEUK_talonite.html

When someone starts tossing up impressive-looking graphs and such, without addressing the underlying methods behind those numbers, you should see it for what it is - subjective analysis masked as scientific.

Look, I'm not saying that Cliff's methods are without merit. But when you take damaged blades, then damage them even further and say "hmmm, the performance was off," I have to roll my eyes.
Look at his Strider review - to paraphrase a portion, he says "Hmmm, the ergonomics of the inner wrapped & bare steel tang aren't very good". Duh! Taking the handle off the knife and remarking on the ergonomics is beyond me.

Limbing trees is a great test. Testing edges on cardboard or cloth is great. Tip snap tests, hey, no problem. I can understand chopping clothes hangers or doing impact tests on concrete. What I don't understand is taking a broken knife, and breaking it again and again, and making performance conclusions as if the knife hadn't been damaged to begin with. That's beyond me.

Kevin
 
Originally posted by Ron@SOG
And I'll donate the SOG portion of the knives for such an evaluation! I want them to be the "Z" knives...OK? ;)

Ron,

You can send ME a few SOG knives...(Trident, Recon Bowie, Night Vision, etc.)...I'll "real-life" test 'em, and send you my testing results in a few years. How's that sound? :D
 
Originally posted by Buzzbait
As for the whole idea of controlled testing through machines, I do believe that there is a certain amount of validity to the idea. I don’t think for a minute though, that a machine could ever take the place of a person for the entire testing process. A machine could do an adequate job of testing cutting efficiency and maybe lateral strength, but that’s about it. It would be very hard to simulate something like twisting a blade that is buried in a hunk of hard wood. Push cuts might be easily tested, but slicing would be much harder.

I agree. By focusing on precise repeatability through use of machines, you're sacrificing relevance for repeatability. Yes, you can do interesting tests with a machine -- I think tests of abrasion resistance might fit well, for example. But for, say, slashing tests, or chopping tests, the effects of human-induced error in the strokes is part of what's relevant. Machine tests absolutely can't replace testing like Jeff Randall's, or Cliff's, though they can be interesting supplements.


Joe
 
I think a lot of this controversy could be resolved if Mr. Stamp would reveal some verifiable credentials so that the vast internet audience had some idea of his background. My skepticism about his methods essentially results from the lack of industry standards to be used for comparison.

To his credit, however, he does get out in the woods and chop things up, so there is a realistic element to many of his ventures.
 
Ron and Spark,

Re: ergo's, performance, etc.

I went back and tried to find the things you guys mentioned in Cliff's reviews, but I was not successful.

Would you guys mind posting 1) the page link and 2) some text that marks the lines in question?

I would like to read it for myself. Also, any place where Cliff ragged on SOG would be nice too.

Thanks,
Dave
 
Hi BurkStar,
Originally posted by BurkStar
Ron, my posts have nothing to do with SOG knives in general (I own several and will probably purchase/trade for more) and I am really impressed with SOG's (your) presence on this and other knife forums and the fact that you do read and respond to the posts. The only question in my mind is your dogged determination to discredit Cliff and the implied question of our ability to be able to use his information in an intelligent manner.
I appreciate your response more than you know. It shows the dichotomy in which I find myself.

First, I think almost everyone here in the forums can see that “customer service” is what I’m all about and that for the most part, I’m fairly good at it. I’m working for SOG and believe wholeheartedly in our products and company. That being said...along comes Cliff.

The man knows more about knives and science than most of us put together. He is offering much to the knife community. Let me throw out an abstract number: I think maybe 60% +/- of his reviews truly have strong benefit (your percentage will vary :) ). It is this other portion that truly concerns me. Here are several reasons:
  • The Average Knife User - There are many readers who are impressed with the fanfare of facts and figures but don't understand the conclusions drawn from them. There are also those who believe that if a knife breaks in an evaluation, the knife is a failure and should not be bought. This is compounded when the reviewer in question does not break every knife he reviews. There are a number of users who can "eat the meat and spit out the bones" in most every review. But unfortunately, I have personally talked with many who have been unduly swayed by Cliff's reviews. BurkStar, I'm sure you (and others) are not one of them.
  • Cliff is a Different Reviewer - Most knife reviews on BladeForums are from "Tom, Dick or Harry" telling us what they think about their latest knife purchase, which was christened on last week's camping trip. Each of us understands this context. But when Cliff writes a review, the context is quite different. Here we have someone who has done "thorough, scientific" evaluations of at least a couple dozen knives. There is nothing like them in knifedom; not even close. He is educated and presents these reviews as professional. Accordingly, they cannot be treated like the other reviews, because they are not received with the same weight.
  • The "Science Reviews" - This has been discussed in detail. Cliff's reviews are presented as scientific. But much of what is found in his writing is in fact very unscientific. It is difficult for the uninitiated to distinguish the difference. Cliff keeps talking about improving his methods, and I think he is. If Cliff can continue improving, he will certainly receive much less flak. Who knows, maybe someday he'll be the biggest and most productive consultant in the knife industry. But that won't happen with his current practices.
BurkStar, please hear me very closely here. You ask me about my "dogged determination to discredit Cliff." My goal has never been to discredit him. I'm trying very hard to, firstly, clear up disinformation about SOG and, secondly, help Cliff understand the areas he needs to work on to better perfect his testing procedures. You have to understand that Cliff is very arrogant and self-absorbed. So receiving information like this from us (or people like us) just isn't in his nature. By nature, I'm very tenacious! So in my determination to fulfill these two stated goals, it may seem that I'm trying to discredit Cliff. And to be honest, in sheer exasperation, I've screwed up and been sarcastic about Cliff (he's certainly an easy target). Also, what doesn't come through well in the forums is humor. I have a very, very dry wit. My Saturday Night Live reference was (believe it or not) light-hearted "sparring" and as much a jab at myself (look at Dan Aykroyd's role opposite Jane Curtin on Weekend Update).

I really hope this has helped you understand where I’m coming from. This truly is not a discrediting Cliff campaign. What would really be of assistance is if Cliff would dial back his arrogance and listen to concerns shared by many in these forums (many who are respected in their fields). These concerns are not without warrant.


DaveH: In a previous post, I've linked to Cliff's Recondo Review which also links to the Recondo thread here in BladeForums. That should be good "starter" reading. Anything else you're looking for?
 
Spark :

Look at his Strider review - to paraphrase a portion he says "Hmmm, the ergonomics of the inner wrapped & bare steel tang aren't very good". Duh! Taking the handle off the knife and remarking on the ergonomics is beyond me.

This is a promoted aspect of full tang vs partial tang knives - functionality in extreme cases of handle failure.

Spark :

When you start tossing up impressive looking graphs and such, without addressing the underlying methods behind those numbers ...

Some of the older reviews like the one referenced didn't get into those aspects, as no one expressed an interest at that time. I have been meaning to update them to deal with the current questions. There are indeed many factors which can influence results, both in a random and a systematic manner. The random aspects tend to average out; however, the systematic ones need careful study. The blade evaluation page will be where this will be discussed. It is still in its early stages and many aspects are still not covered.

I have been meaning to add to that page specifically comments on the hemp cutting, as I have been doing a lot of it lately and found that it is really method dependent. Some factors are obvious, but some are not and were surprising. I am also still working out how to turn the numbers into simple meaningful statements, which depends on coming up with a model that reproduces the results to a decent degree, which in turn means understanding the process in more detail than I currently do. Unless you just want to use an empirical model, which I don't, but may have to later on.

So yes, even simple things like cutting cardboard or rope can indeed be affected by method: factors such as speed, backing material, where on the rope you do the cutting (how far from the free end, etc.), tension, and so on. As an example, the speed at which you cut cardboard will significantly influence the rate of blunting; one of the old reviews commented on this in some detail. Thus you have to cut at a constant rate. Again, small random changes will average out, so just take care not to rush with one blade to finish up, for example. I would however not agree with the proposition that these issues are so very complicated that they can't possibly be understood. We are talking about very basic issues and how they apply to a simple wedge.

Of course, some aspects can't be eliminated and have to be treated in separate ways. For example, chopping comparisons will be influenced by swing technique. There are two main techniques: a very hard drive with *heavy* wrist torque, and a very fast, snappy swing with light wrist torque [usually with a draw]. Which technique is optimal depends on the physical characteristics of the user and the blade (mass, balance, length and grip). This is something else I have been meaning to look at in more detail. The results I quote for chopping comparisons are for the first method unless described otherwise. Someone of different physical characteristics could indeed get different results, depending on how their body type favoured the blade; this has been discussed in the past. Wood type can also influence it by affecting penetration and binding. In detail, the greater the penetration, the greater the influence of the primary grind, and thus the lower the effect of the edge geometry. I have been meaning to bound this spread, but unfortunately don't have access to any of the very hard woods.

Not2sharp, in regards to Ron's quote, what I estimated was the impact energy, not the force, two very different quantities. The full details are in the review; if you want to discuss the details, fire away. Basically, to estimate the impact energy, you need the time of the swing, distance covered and impact location. These can be well estimated by simple means. To reproduce the force of the impacts you need this estimate and the time of the impact, which depends on the compressibility of the metals (mainly the bar) and the strength of the person holding the knife. To put an upper limit on the strength of the person you would just have the knife fixed in place, or could use some specific tension (spring) and state what it was. The looser the spring, the lower the force from the given impact energy. The bar was probably just mild steel, which could easily be determined and reproduced.
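The recipe in the preceding paragraph (swing time, distance covered, then kinetic energy) can be sketched numerically. This is a hypothetical point-mass illustration, not Cliff's actual calculation: the masses and swing figures are invented, and the uniform-acceleration assumption is mine.

```python
# Hypothetical sketch of estimating impact energy from swing time and
# distance, treating the knife as a point mass that accelerates roughly
# uniformly from rest (so impact speed ~= 2x the average speed).
# All figures below are invented for illustration.

def impact_energy(blade_mass_kg, arc_length_m, swing_time_s):
    """Estimate kinetic energy at impact, in joules."""
    avg_speed = arc_length_m / swing_time_s         # m/s over the whole swing
    impact_speed = 2.0 * avg_speed                  # uniform-acceleration assumption
    return 0.5 * blade_mass_kg * impact_speed ** 2  # E = 1/2 m v^2

# e.g. a 0.5 kg blade swung through a 1.5 m arc in 0.25 s
print(impact_energy(0.5, 1.5, 0.25), "J")  # prints: 36.0 J
```

Note that this gives energy, not force; as the post says, converting to force would additionally require the impact duration.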

In regards to machines being necessary, this argument doesn't even make sense from a basic perspective. It implies that people could not do informative work before we had such machines. Yes, machines allow a higher precision, which is always good as it allows examination of finer details. However, first of all, there is absolutely no correlation between precision and robustness (repeatability). Secondly, I would argue that performance which is significant enough to be noticed by a human user can be estimated by simpler means (this should be readily self-evident). However, machines have one huge benefit: they would allow you to do things much faster, and thus would be of great benefit as you could look at more aspects.

For example, I keep meaning to look at the slicing aggression of an edge as a function of grit size in microns and check if it is linear, the edge retention influence (which is nonlinear), and the angle influence on both, which I think is quadratic. The time it would take to get a solid conclusion is about 150 hours to do all the cutting (a lot of rope), and about 10-20 hours to log all the data and look at it in detail. With a CATRA-type machine it wouldn't take a day. The main cause of the large times when you are doing something by hand is repeating it to refine the measurements and eliminate the variance due to sharpening effects, consistency of materials, cutting technique, etc. Even when you do that you have to take care, as experience can influence the results. For example, with the hardwood dowel cutting, the more of it I do, the stronger my wrist gets, and thus I have to keep rescaling the older results every few months (I have yet to update those webpages).
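For anyone curious what the linearity check described above might look like once the cutting is logged, here is a minimal least-squares sketch. The grit sizes and aggression scores are invented placeholders, not measured data.

```python
# Minimal ordinary-least-squares check of whether slicing aggression is
# linear in grit size. The grit sizes (microns) and aggression values are
# invented placeholders; real numbers would come from the rope cutting.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

grits = [15, 25, 40, 60]            # abrasive particle size, microns
aggression = [1.2, 2.1, 3.5, 5.0]   # hypothetical slicing-aggression scores

slope, intercept = linear_fit(grits, aggression)
# If the residuals are small and show no systematic curve, a linear model
# is adequate; a consistent bow in them would point to nonlinearity.
residuals = [y - (slope * x + intercept) for x, y in zip(grits, aggression)]
```

The same fit applied to the residuals of a quadratic model would be one simple way to probe the suspected angle dependence.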

To be clear, machine work would not replace the hand work, just as stock testing doesn't replace normal knife work; it would just be complementary. The other reason I have not done the grit thing is that all the sharpenings (about 75) would change the blade profile far too much. I have solved this of late with an Olfa Extra Heavy-Duty cutter. The replacement blades are cheap, and thus I can easily keep the profile constant by changing blades for every grit. It also allows a consistent reference for subjective sharpness comparisons (ease of slicing paper, shaving, etc.).

As an example of "human" testing, let's assume you want to see how tree type affects the chops needed. You do this by keeping note of the chops required for specific types of trees. This is where the above posters would jump in and exclaim "Aha, that is unscientific. You are ignoring random swing factors and not even taking into account the size of the tree and location - duh!" [light and soil type will affect density *greatly* even in the same wood class]. This is all true; however, these are random factors and will average out in the long term. Results would not be very accurate if you compared a young and soft 4" spruce to a 12" seasoned pine; however, after chopping through 1000 of each (this isn't an active year's worth of wood chopping), the average size and nature of each type will be very consistent and allow a very accurate estimate.

Now a more relevant point would be: "What if the Spruce trees in your area are consistently bigger or smaller than the Pine?" Let's assume, for example, that a generation ago wood of one type was selectively cut (this was the case for Birch, by the way). In this case the wood size would not be evenly distributed between Pine and Spruce. The same goes if one type grows faster than the other, or reaches a larger fully grown size. To eliminate this factor you could go around the area in which you were cutting and estimate the size of the trees and see if there is a consistent difference (just keep a running sum).

Of course, you could also just keep track of the size of the tree and the number of chops at the same time and compare scaled results; this would allow much more precise work (which would mean you didn't need such a large sample). But this isn't even necessary if you do enough work. You can see this quite clearly by asking an experienced axeman how many chops it will take him to cut through a tree if you allow him to do a test cut to check the wood. The best can tell you to within a chop or so. They have not even bothered to do the above counting, by the way; it's just the effect of a much larger sample size requiring a much lower precision in the sampling.
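The scaled comparison mentioned above - tracking tree size along with chop count so the size factor drops out - might look something like this. Normalizing by cross-sectional area is my assumption, and every number is invented for illustration.

```python
import math

# Sketch of a scaled chop-count comparison: record (trunk diameter, chops)
# per tree, then normalize chops by cross-sectional area so tree size
# drops out. The area scaling and all numbers are assumptions.

def chops_per_area(records):
    """records: list of (diameter_in, chops); returns mean chops per square inch."""
    ratios = [chops / (math.pi * (d / 2) ** 2) for d, chops in records]
    return sum(ratios) / len(ratios)

spruce = [(4, 11), (6, 26), (5, 18)]   # (diameter, chops) - invented
pine = [(8, 70), (10, 115), (7, 55)]   # invented

# A lower scaled figure means fewer chops for the same wood cross-section.
print(round(chops_per_area(spruce), 2), round(chops_per_area(pine), 2))
# prints: 0.9 1.43
```

With enough trees logged, the per-area averages would stabilize regardless of the size mix, which is the averaging-out argument in the post.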

Ray, thanks for the comments, but I made the decision to do the block chopping, so the responsibility for it is mine. I stand behind it for reasons I have noted in detail before. It isn't even what I would consider one of the hardest tasks for a heavy use knife; lots of wood work is more difficult on a blade.

Mike, you have made a solid point in that the most valuable results for any individual with a knife they own are what they can do themselves. However, there are lots of reasons to read what someone else has done. They might have knives you don't, have access to materials more readily, be more experienced with a certain technique, or you just want to look at QC issues and manufacturer response. For example, in the recent thread on the Valiant Golok I was discussing various aspects of vegetation with Jimbo. What he describes doesn't grow around here, so I have no way to get any experience with it directly. I also don't have access to trees of the size that he does, nor of the extensive leans that populate his vegetation.

Thanks for the support for those that offered it, and what is more did so clearly without attacking anyone. We are after information exchange here and that is the right path for that goal.

-Cliff
 
Please notice that in every post I have made in this thread, I have said that there is value in subjective testing.

But, when it comes to making a head-to-head comparison of two knives, I'd like to see a more scientific method with an emphasis on consistency and repeatability, so that all of the knives involved are subjected to the same stresses.

Consider, for example, another thing that frequently gets reviewed: cars. A good review includes careful, objective tests: 0-60 time, 60-0 time, 1/4 mile time, fuel economy on a dynamometer. Even things like noise level can be measured and, therefore, compared between models. But a good review doesn't stop there. Several drivers with vast experience take the car out and drive on everything from freeways to back roads, in good weather and in bad, day and night. Their impressions are important too. And then there's the little things that can be very important but virtually impossible to scientifically measure. A few weeks ago, Click and Clack, The Car Guys on NPR, noted that neither of them liked BMWs. Why? Not performance. Not quality. Not safety. No, neither of them liked the way BMWs smell inside. Wow! Who'd have thunk it. But, if I was shopping for a new car, I might not, in the excitement of the test drive and so forth, notice something like that. I could be stuck with a smelly car. On the other hand, when I make my own test drive, I might decide that it smells fine to me. The point is that having heard their review, I'd know enough to take a careful whiff when I made my own evaluation. So, there definitely is value in subjective evaluations.

The problems I see today are that we have little if any objective testing. It's all subjective. Second, a lot of that subjective testing ends up disguised to try and make it look objective. And, finally, people are making serious pronouncements, "This knife is junk, this other one is the ultimate...," based purely on subjective testing. I don't want to take subjective and practical testing out of those decisions, but I'd really like to see some more scientific data brought in.

If you go to your doctor and say, "I have a pain in my gut," that's subjective. Your doctor immediately starts to think... anything from indigestion to food poisoning to stomach cancer. Before he goes off making a major pronouncement, either "I'm scheduling you for surgery to remove your stomach," or, "It's probably nothing. Here's some pain pills. Go away," you'd probably like him to collect some objective data, maybe take an x-ray or run some lab tests or something. Your subjective observation, "my gut hurts," is an important part of the diagnosis (ask a vet, who often doesn't get that sort of data), but it's only part of a good diagnostic process.
 
Ron,

I went back and read the recondo test and the subsequent threads. My understanding is:

Cliff used a pipe to beat on the knife, snapped the tip off, used a wrench and a vise to break the blade, and also used a hammer to break some pieces out of the blade.

Now my unscientific, subjective opinion:

Was it "real world"? Not mine, but probably someone's. Was it "scientific"? Until we decide what scientific is, it's no more or less scientific than anything else. Did Cliff say it was a good or bad knife? Nowhere I could tell. Did he say it depends on a person's needs and it's hard to tell from one sample? Yes.

Now some comments about the "other 40%" of the users mentioned in your post.

Average User - First, a very small number of average users will ever see Cliff's review. Further, any who are unduly influenced are probably the ones that are equally unduly influenced by marketing claims. When you say you have talked with "many," how many is that? Less than 10? And what percentage is that number compared with the overall sales of this knife? After all, if we are being scientific, let's have some numbers that prove this significant impact you seem to imply.

Cliff is different - Same argument as above: if Cliff is seen as influential, or more so than other reviewers, then let's have some numbers. Did you take a poll? No conjecture; we want to be scientific.

The Science reviews - As mentioned before, Cliff is nearly the only one doing this type of testing. Unless someone else is even trying, it seems hard to criticize what he does.
 
I agree. By focusing on precise repeatability through use of machines, you're sacrificing relevance for repeatability.

Are we really sacrificing relevance? Suppose we had a standard product label that we could apply to every knife, and a standard mechanical procedure to determine the ratings on the label.

It might look something like this:
===========================================
Description
Blade: 440-C
Handle: Linen Micarta
Lock: Liner Lock
Sheath: None
RC: 58

Performance
Wear: 10%  1 minute, 45 seconds
      25%  4 minutes, 15 seconds
      50%  10 minutes, 3 seconds

Lateral Strength: deflection: ?? ft-lbs
                  permanent bend: ?? ft-lbs
                  fracture: ?? ft-lbs

Cutting Strength: deflection: ?? ft-lbs
                  permanent bend: ?? ft-lbs
                  fracture: ?? ft-lbs

Edge performance: 10 ft-lbs = ?? mm
                  50 ft-lbs = ?? mm
                  100 ft-lbs = ?? mm

Corrosion: ?? hours
===========================================

The actual values on the table would initially mean very little to us. Who knows how much pressure they are putting on a blade, or how much wear the material they are cutting will generate? But they would provide meaningful information over time. If you want better edge wear, you could simply go out and find something with a higher rating; the same goes for edge performance, strength, or anything else we decide to measure.

The results would become meaningful as a point of comparison; and would be extremely useful in comparing various products, or evaluating a maker's quality control.

Doing something like this would be a relatively straightforward exercise. Unfortunately, the market for knives is currently premised on mutual ignorance. We don't know how products compare, even products coming off the same assembly line; we simply buy on appearance and gut feel.

It is no wonder why people are constantly asking whether knife X is worth 10x as much as knife Y, or, what is the best knife for this, that, and the other thing. We have no objective point of comparison. People are always hesitant to come out and say that one knife is better than another; and whenever they do it inevitably leads to questions of material fairness and claims of partiality.

Perhaps the business will grow up at some point. Until, then I will be happy to take all reviews with a grain of salt.

n2s
 
Objective testing by machines is already being done. The output is the spec sheet that the steel manufacturer publishes. Oh, but that is not knife testing, you say. Well, once a blade with a given edge geometry from a given steel was tested, you could throw the machine away because the data would be the same for every properly heat treated blade from that steel with that edge geometry. In fact, pretty soon people would notice tight groupings of data around certain edge geometries, so those would cease to be meaningful, and you are right back to the steel spec sheets. So much for scientific knife blade testing. You will never learn more from it than you learn from the steel spec sheets today. It will always be only part of the story.
 
once a blade with a given edge geometry from a given steel was tested, you could throw the machine away because the data would be the same for every properly heat treated blade from that steel with that edge geometry.

I agree; but how many different blade shapes and edge grinds do we have? The steel type is a relatively small variable by comparison.

n2s
 
Knifenerd, Cliff Stamp's credentials, as I understand them, are that he has a dedicated interest in knives, he works and plays A LOT with them, and he tries to find ways to meaningfully convey his experiences with them - in "real world" and "testing" situations. What more do you want? Maybe only people with PhDs in physics AND mechanical engineering AND metallurgy, etc.? There is value in having the "ordinary guy" conduct experiments, not just experts. Cliff Stamp, I think, is someone in between those two poles.
Another point: I don't ever recall Cliff saying his testing is "scientific." It is simply methodical within the constraints he faces (he is not a corporation with endless resources), he strives constantly to improve his tests, and he takes constructive criticism seriously. Those who say he is "entirely subjective" - no, subjective statements on this forum go something like "I have not tried the Becker but I am confident the Busse is superior" or "I don't have a Busse but my Strider will match or exceed it, etc." - those are subjective statements. As Joe Talmadge pointed out, Cliff strives to offer a baseline for comparison - he tries to provide information comparatively (whether this is science or not, it is useful whatever its limits) so that the "average user" can relate.
As for the reference to dumb "40% average users" - if they are dumb enough to take Cliff's comments without scrutiny, they are blind enough to take a maker's "hype" at face value too.
Most of those who dismiss Cliff do not have a "consistent" line of criticism themselves; instead they constantly move the goal posts - one day he is not "scientific" enough, the next day too abstract, the next day not "real world" enough; they exaggerate his "destruction tests" but ignore his cutting tests, etc.
This thread is interesting; I appreciate at least that those who post here are willing to "show their stripes."
Martin
 
Hi DaveH,
Originally posted by DaveH
Now some comments about the "other 40%" of the users mentioned in your post.
I'm not sure what you are referencing. The only comment I can think is:
I think maybe 60% +/- of his reviews truly have strong benefit...
If this is so, let me explain. I was commenting that in reading Cliff's reviews, I personally found that in my estimation, 60% +/- of the text of each review offered wonderful research and applicable data for us knife lovers to read (if we can get through the "thick" stuff). Remember, I was just "throwing out a number." My concern was with the content of the balance of the text in each review.

No reference to "40% of the users."
 
Ron

Thanks for clarifying that. However, I believe my evaluation of your comments is still relevant, even if it's 40% of the review, because ultimately we are talking about people in either case, as they are the ones that make the decisions.

And since you didn't disagree with me, or post scientific evidence refuting me, I'll assume I won those points. :p
 