what's a good set of cutting tests for folders?

I was wondering if a short list of cutting tests could be made for a sort of baseline comparison to be performed. I was thinking it should revolve around a short shopping list for the hardware store. Maybe an agreed-upon type of cardboard to dice, insulated wire to strip, wood dowel to whittle, tie wraps to cut, manila rope, and rubber hose for utility testing. For SD testing, I don't think real flesh from the butcher would be a good choice, since it wastes meat and creates a mess; is there some dense foam or other material that would be suitable? Something to compare to leather would probably also be good, along with denim. Perhaps also chopping copper wire or small nails, if anyone wanted to risk a knife edge for the sake of sharing info, although I'm not thinking of destructive testing of lock strength or blade flex.

Perhaps also a suggestion of what a good level of performance is for each test, depending on whether the folder is a utility or SD design: giving an A-F grade for the # of clean cuts in cardboard, the depth of a stab into a phone book, how many cuts to sever a 1" dowel, etc.
 
You might want to visit Cliff Stamp's website. Cliff does this sort of testing on a lot of current knives. I don't know the URL, but you can Google "Cliff Stamp" & it should pop up. :)
 
hardheart, if what you're after is a way for different people to test knives using some standard tests and test media, then use that as a way to compare results, I think at least some of us who have done a lot of testing have given up on that approach. The problems? Materials vary from place to place and even lot to lot. You can buy a particular kind of rope or cardboard from somewhere, then go back a year later and buy the exact same thing, and see some variance ... and the variance is even worse for people geographically distant from each other. Second, you develop skill as you do these tests -- yes, even with simple tests. So, test one knife today, then test it a year later and you may get different results, even if you saved the original test material! And with different people, results become even more incomparable, especially in tests where big guys can put their weight into a cut.

So, I feel that we can't use a list of "standard" test media as a basis of comparison due to media variation, nor can we compare one tester's results to another's (or even the same person's results at different times), due to differences and changes in skill.

Well then, how can we do a comparison? The answer for myself has been to always test against a "baseline knife" -- that, and not a set of standard tests or test media, is what can help get you valid comparisons. Back when I was testing a lot, every time I tested a knife, I also tested it against a Spyderco Endura and a couple of other knives I also used as benchmarks. So, even if this new batch of cardboard was particularly tough, or I'd gotten particularly skilled at a test, one thing that remained constant was that I was comparing the performance of the "test knife" with the Endura (and one or two other benchmark knives as well -- the more benchmark knives, the more controlled the results, because you can be more sure that the benchmarks themselves aren't changing in performance due to constant sharpening or whatever).

So, what I feel you should be after: 1. a "core" list of tests and test media, applicable to every folder, 2. some other tests that you might use for particular types of knives (e.g., light prying for a tactical), 3. at least one or two baseline knives you also re-test every time you test a knife.

Your basis of comparison is the test knife to the benchmark knives, not the test knife in a vacuum.
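The ratio-to-baseline idea can be sketched numerically. In the toy numbers below (all invented purely for illustration), the cardboard batch in the second session is much tougher, so raw cut counts drop sharply, yet the test knife's ratio to the baseline knife barely moves:

```python
# Normalize each session's result against a baseline knife cut in the same
# session. The score is "cuts of cardboard before the edge dulls"; every
# number here is invented for illustration.
sessions = [
    {"baseline": 40, "test_knife": 60},  # session 1: easy-cutting batch
    {"baseline": 20, "test_knife": 31},  # session 2: much tougher batch
]

for i, s in enumerate(sessions, 1):
    ratio = s["test_knife"] / s["baseline"]
    print(f"session {i}: raw score {s['test_knife']}, ratio vs baseline {ratio:.2f}")
```

Raw scores fall from 60 to 31 between batches, but the ratio only moves from 1.50 to 1.55 -- that stability is what makes cross-session (and cross-tester) comparison possible.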

Joe
 
Cliff's knife site

I use polyester rope and wooden broom handles myself. Poly is pretty good at telling you about the aggressiveness of the edge. Broom handles are cheap and also pretty hard and stiff. They are uniform in quality, so results are comparable. I whittle them to a sharp point and also see how much effort it takes to cut them into lengths, using a method most suitable to the knife in question. I also slice paper to see how clean the cut is, and some paracord to give an overall impression. If the knife is a hard user, I might chop into a block of lead and stab into a tin can to see how the edge takes it. I don't use scientific methods or measurement like CS does, but these usually give me a pretty good feel for a knife.

...oh, and tomatoes. I always cut tomatoes. :)
 
Hmm, very interesting point. I was hoping that manufactured goods could be expected to maintain a greater level of uniformity. In reading about some of the tests, especially camping/survival torture tests, I had noticed how necessary it was for the tester to specify the type, size, and condition of wood being split, the ground being dug into, and the weather conditions during the testing. It seemed fairly impossible to replicate completely, and one result of hacking cypress knees could not be directly compared to someone splitting oak, or digging through clay instead of sandy soil.

Perhaps the subjective nature and varying skill levels of the owners/testers might not be a huge problem. I wasn't really looking to go to the level Cliff has: microscopic views of the edge, measuring grams of force to cut thread, etc. More of a light review method. People often ask for and get 'quick' reviews of knives, but they can be so quick as to simply say: comfortable grip, good steel, cuts lots before it needs sharpening. Instead one could say: I give it an A on cardboard (# of cuts before dulling too much to continue), a B for the dowel (# of hacks, whether it bound or glanced easily), and so on. Not at all scientific, but it can give a clearer idea, at least I would think so. The owner should also give a little background when telling their results, and I believe many often do -- what they do for a living/hobby, which may suggest (or they themselves could spell out) their proficiency and expectations for the test administered (LEO, electrician, office professional, avid camper, hunter, etc.)

The knife rags do very similar tests on knives, but I haven't noticed them doing the same set of tests consistently. On one knife test or comparison, they'll slice cardboard, on another they'll forego the cardboard and use rope, on yet another they go for rubber hoses, etc. The reviewer will cut rope with more knives on more tests, but he might not do it for every knife he tests.

The use of a baseline knife would also be helpful. Again, there could be some wild variance: how much the knife has been sharpened by the owner, any reprofiling, any QC problems it may have suffered (like a bad heat treat), and which specific model -- an AUS Endura should perform a little differently than a VG-10 one, I would expect, along with SS handles behaving differently than Zytel. The only Endura I have is a fully serrated SS-handle model, so I'm sure to get different results than a PE Zytel. Perhaps Victorinox? Cheap and useful, the same blade appearing on many different models, and maybe a consistent level of QC to assure sufficient sameness. Opinel is cheaper still, but probably too cheap to expect equal heat treats and edges across multiple model 8s.
 
For me cutting or whittling wood has always been a good test of a knife edge once you understand that not all edge profiles are made to cut or slice into wood efficiently. But it is not always an accurate test for knives made to primarily cut rope or cardboard.

Some knives have an edge geometry that is simply not conducive to whittling or slicing into wood. They may perform quite well in other tasks like cutting rope or slicing newspaper to ribbons, but when you go to whittle anything more than a pencil to put a point back on it, you find out that it really has the wrong bevel angle for this task.

Some knives are made this way because the maker has deemed the edge he/she put on it to be the best overall for edge keeping, strength, or the purposes for which it was primarily made, etc.

So, in some cases you may be disappointed with certain knives when you first go to slice into a block of wood especially if it is a block of hard wood. I guess what I'm saying is that how well a blade slices the wood is not an indication of dull or sharp but more of an indication of edge profile.

Any blade could be made to slice better with a steeper edge bevel, in most cases. I did this recently with my new David Boye folder. The first time I went to slice into a block of maple, it was very disappointing in performance, because David's bevel is actually very thin and made for boating and primarily for cutting rope and other softer materials. Did it mean the knife had bad steel? No, it simply meant it needed to be reprofiled. After the reprofile was done, it slices up there with my better carving knives.

Sometimes the edge keeping can drastically change on a knife that is reprofiled to a steeper bevel angle, so it isn't always a good idea. On the Boye knife it has had little effect from what I can tell, but I have not tried 1500 cuts in rope with it since reprofiling it, either.
 
hardheart said:
The knife rags do very similar tests on knives, but I haven't noticed them doing the same set of tests consistently. On one knife test or comparison, they'll slice cardboard, on another they'll forego the cardboard and use rope, on yet another they go for rubber hoses, etc. The reviewer will cut rope with more knives on more tests, but he might not do it for every knife he tests.

Sure, that's what the knife rags do. And, speaking frankly, I think this type of review is only a half-step beyond a review that consists of just a visual inspection and report. A knife "test" in a vacuum doesn't tell me anything. Knowing that "I made 50 cuts of hemp rope and it still shaved!" tells me almost nothing about what I should expect of that knife's performance. Of course, for the magazines, this makes perfect sense: if they tested knives against each other, they'd risk irritating the maker of whichever knife "loses" in testing, and that means risking advertising dollars.

But, just because it doesn't make sense for a magazine, doesn't mean we here on the internet shouldn't be doing it. In fact, the situation is almost reversed: there's almost no reason for us to go through the work of giving a knife a workout by itself (resulting in nearly-irrelevant, context-free data), when the tiny bit of marginal work of testing a benchmark knife alongside it will yield really interesting, useful data. It's why I sometimes get frustrated when a well-meaning knife enthusiast puts a ton of thought and effort into his testing, and misses the simple step of testing another knife along with it -- all that effort for context-less data that's only a fraction as useful as it could be.

One other thing: I don't think everyone has to use the same benchmark knife (or set of benchmark knives). The important thing is to provide a basis for comparison, in my opinion ... not everyone has to use the same benchmark in order for some valid extrapolation to be done.

Not trying to discourage you here, BTW! A set of core tests we all do seems like a reasonable enough goal, though again the tests will have to be supplemented for different knives ... I can't imagine it would make sense to put a Calypso Jr through anything close to the tests you'd want to put, say, an SMF through, or to have the same tests for a Battle Mistress and Deerhunter. But it doesn't make sense to go through all this effort without coming to some guidance on a testing methodology (obviously, I favor using a set of benchmark knives versus the test knife) ... in fact, I think the methodology may be more important than the core tests!

Joe

PS A good example of what I'm saying: over on the Spyderco forum, we've known for a long time about how the Yojimbo and Ronin perform in slicing through meat (simulating a defensive slash). But one guy took it upon himself to re-do these tests with 5 or 6 other knives ... all of a sudden, we had a basis of comparison on what the Yojimbo's tests results are. Suddenly, the results were fascinating, because we could see what kinds of results are really special.
 
Joe Talmadge said:
...I feel that we can't use a list of "standard" test media as a basis of comparison due to media variation, nor can we compare one tester's results to another's (or even the same person's results at different times), due to differences and changes in skill.
Media variation can be taken into account, as it is easily measured by repeating the same work with different samples. You can also eliminate the user with a few simple restrictions on the cutting. To be clear, I don't think this should represent all the work done, as user variance is actually important: I think you want to see how a knife handles for an older, more skilled user and in physically more capable but less experienced hands.

However, I don't think mandating a set of uniform tests is constructive, because this is mainly a hobby, so telling guys what to do is more than a little ridiculous. I appreciate any work that people share which isn't blatant promotional hype. What they actually decide to do should be chosen based mainly on what they enjoy and the materials they have at hand.

When I lend out knives all I tend to say is try to take note of what you are cutting and inspect the knife periodically. Everyone can offer information of value no matter what you do, just take some time so as to make the comments meaningful. There are lots of things that I do which would not be practical to many because they simply don't have acres of wood they can cut, nor access to stock lumber through a family construction business.

So in general, I would suggest that instead of seeking to create some kind of uniform criteria, you focus on doing solid work and inspire by example rather than trying to force others into action. Here are a few basic guidelines:

-use more than one knife to allow meaningful performance evaluations

-repeat the work when possible to make conclusions more robust

-try to work with as many different types of materials as possible

-experiment with different methods/finishes

So for example, if you wanted to evaluate a blade's performance on rope you could:

+use different types, and more than one roll of the same type
+use another knife as a comparison
+do the cutting more than once to raise accuracy and precision
+try different methods : push/slice, coarse/fine edges

There is really no limit on what you can do, just what you have the time for and inclination to achieve.
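Cliff's point about doing the cutting more than once can be made concrete: with several trials you can report a mean and a standard error instead of a single number. A minimal sketch using Python's standard library (the cut counts below are invented for illustration):

```python
import statistics

# Five repeated trials of the same rope-cutting test with the same knife;
# the counts are invented for illustration.
cuts_per_trial = [48, 52, 45, 50, 55]

mean = statistics.mean(cuts_per_trial)
stdev = statistics.stdev(cuts_per_trial)   # sample standard deviation
sem = stdev / len(cuts_per_trial) ** 0.5   # standard error of the mean

print(f"mean cuts: {mean:.1f} +/- {sem:.1f}")  # prints "mean cuts: 50.0 +/- 1.7"
```

A single trial of 45 or 55 would have told a noticeably different story than the mean of 50; the standard error tells you how much to trust the number.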

-Cliff
 
I was kind of thinking that if several people each tested a different knife, but attempted to do it with as close to the same criteria as possible, it would give a good approximation of comparison testing. You could run tests on your Endura, I could do the same with my SERE, someone else could do the same for a Sebbie, and so on, and we could see how each knife performed. There would of course be differences in opinion and some variance in materials/execution, but that doesn't seem all that bad, actually. If a few people all cut rope with an Endura but got drastically different results, it could be a learning and sharing experience, not only in the performance of that knife, but in how we individual knife users feel about blade performance and what we each consider the proper way to go about tasks that require a knife. It could also help show how much weight should be placed on a test like rope cutting, if a really drastic change in results occurred because the rope itself was not consistent between tests.

And I just thought of something else-what if everyone on a passaround all agreed to perform a particular set of cutting tasks, and each reported their results? Then you have the same knife, and what should be the same tests, being affected by user differences, how they prefer to handle the blade, what their touching up of the edge does to cutting performance, a variety of opinion on design and execution with the exact same knife.

I would also like to see a basic EDC cutting-chore list come from this. As you said, some knives can be expected to do things others will not. But every knife is designed to cut, and in the realm of folders, I think there should be a set of cutting tasks that every knife seeing time as an EDC should be able to perform without question.
 
hardheart said:
I was kind of thinking that if several people tested a different knife each, but attempted to do it with as close to the same criteria, it would give a good approximation of comparison testing.
It is possible to be informative this way, but without common baselines you need to be really rigorous and constrained in your method. For example, have four different people of four different strength levels carve wood with four different knives: it is impossible to tell from the results which knife actually cuts better. In order to get meaningful results from work of that type, you would have to look at it from the point of view of statistics and get a decently large sample.
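The identifiability problem Cliff describes can be illustrated with a toy multiplicative model (two users and two knives are enough to show it; every factor below is invented): when each knife is tested by a different user, the user effect swamps the knife effect, while a fully crossed design recovers the true ranking.

```python
# Toy model: observed effort to make a cut is (user factor) x (knife factor),
# lower is better. All numbers are invented for illustration.
user_factor = {"weak": 1.4, "average": 1.0, "strong": 0.7}
knife_factor = {"knife_A": 10.0, "knife_B": 12.0}  # knife_A truly cuts better

# Confounded design: each knife is tested by a different user.
confounded = {
    "knife_A": user_factor["weak"] * knife_factor["knife_A"],
    "knife_B": user_factor["strong"] * knife_factor["knife_B"],
}
print({k: round(v, 1) for k, v in confounded.items()})
# The truly worse knife_B scores lower (better) because a stronger user held it.

# Crossed design: every user tests every knife; average over users per knife.
crossed = {k: round(sum(u * f for u in user_factor.values()) / len(user_factor), 1)
           for k, f in knife_factor.items()}
print(crossed)  # true ranking recovered: knife_A averages lower than knife_B
```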

-Cliff
 
Cliff Stamp said:
It is possible to be informative this way, but without common baselines you need to be really rigorous and constrained in your method. For example, have four different people of four different strength levels carve wood with four different knives: it is impossible to tell from the results which knife actually cuts better. In order to get meaningful results from work of that type, you would have to look at it from the point of view of statistics and get a decently large sample.

-Cliff

That's why I ask here at the most popular knife discussion website in the world! :D Also, at least a cursory mention of the owner's/tester's best-performing blade for that task would be nice (the tested blade having some details along with each test score, while past tests on other blades might only be mentioned, or linked). That's really what I was thinking of for this testing criteria. Someone could get a new knife and perform these simple cutting tests. They could then grade the blade on each test, using subjective feelings of comfort and effort, plus some numbers that we could maybe agree on, like how many cuts of 1/4" manila would earn an A for an EDC blade, or what to grade a blade whose edge rolls while whittling 1" dowel from Home Depot. Anyone else with the same knife could do the same, if they chose to, and as different users did this with different knives, the comparison would naturally emerge. If 10 people give knife X an A for cutting cardboard, then maybe it's a little better than knife Y, which 8 people gave a C+. Also, having a variety of people give results would be nice, I would think. How a gentleman who might be 4 inches taller and 50 lbs. heavier than me feels about a knife may not be very beneficial to me. Conversely, my test of another knife might not help him either. But even a fraction of the Bladeforums members could provide a good cross section.

This is all pretty low-level stuff, though. I think the most intensive testing is best left in the hands of someone like yourself. It's hard to see it happening, but I'd just like a clearer, more consistent review pattern, because even knives which are generally considered to serve the same purpose are tested in different ways.
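The grade-pooling idea described above could look something like this sketch; the point scale mirrors a GPA-style mapping, and every grade and knife name is invented for illustration:

```python
# Pool letter grades from several testers into one numeric score per knife.
# The point mapping is GPA-style; grades and knife names are invented.
GRADE_POINTS = {"A": 4.0, "B+": 3.3, "B": 3.0, "C+": 2.3, "C": 2.0, "D": 1.0, "F": 0.0}

reports = {
    "knife_X": ["A", "A", "B+", "A"],  # four testers' cardboard grades
    "knife_Y": ["C+", "B", "C", "C+"],
}

for knife, grades in reports.items():
    avg = sum(GRADE_POINTS[g] for g in grades) / len(grades)
    print(f"{knife}: {avg:.2f} average across {len(grades)} testers")
```

With many reports per model the averages start to mean something; with only one or two, the user-to-user variation Cliff describes dominates the score.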
 
Personally, I'd like to see video, or even short clips from a cheap digital camera, of testing and cutting performance and comparisons, in addition to still shots, especially from those doing the most 'intensive' testing and reviews.
 
hardheart said:
...at least cursory mention of the owner's/tester's best performing blade for that task would be nice


...

get a new knife and perform these simple cutting tests. They could then grade the blade on each test, using subjective feelings of comfort and effort, plus some numbers that we could maybe agree on
Yes, I have been thinking of adding this to the reviews because it makes them a lot more meaningful; either a rating out of 5 or out of 10, with a few different aspects as you mentioned.

A complication is which blades to include in the rating, either the best for that class, or the best in general, or possibly two ratings, a class and an open one.

-Cliff
 
Personally, I test my knives on the type of things that I usually cut. That means cardboard and some whittling. Having direct comparisons to known knives will hopefully help others, since, as has been noted, there is a fair amount of variation in materials. So if, for example, I test my Buck Mayo, I will test it against other knives that I have (e.g., Spyderco Native and Calypso Jr, Benchmade D2 mini-Grip, etc) so that anyone else can at least get a sense of relative performance. My results may not always be the same as others, but at least the direct comparison, cutting the same boxes and the same piece of wood with the different knives, gives a frame of reference. Otherwise, just testing one knife leaves too many questions unanswered. Just remember that my cutting technique and edge angles will probably not be the same as yours, and your results may vary from mine.
 
lambertiana said:
... edge angles will probably not be the same as yours...
This is probably the biggest issue of contention in knife performance, and why you can see such radically different perspectives at times. Heat treating variances can also induce similar problems, but geometry, especially edge thickness and angle, can vary a lot even in high-end knives, and small changes can have a radical influence, since percentage-wise they are quite large.

-Cliff
 