Rope slicing test CPM D2 vs. D2

Very well said, Sal. Mike - I certainly understand the time thing.

And I'm with you, STeven.

I too like any testing (though it pains me to see the random beating on knives with hammers). It will be interesting to see what conclusions are made after the data has been examined.
 
...anything to add consistency or get more accurate results is good in my book.

Tie the cord across an open space under a given tension. Lift it a set height with the edge and then draw the blade to cut the cord. This eliminates the variable of the hand force pulling down, as well as angles, but it takes a LOT longer to do. You can also do the same and just draw the blade on top of the cord with a given force, with the whole thing on a scale; this is then very similar to the general hemp cutting, only with a much smaller piece of rope.


I really applaud your testing, that should be fun, but a lot of work. I look forward to the results.

As do I; that is really an insane amount of work, though. I would be exceptionally pleased if people just did one angle at one finish for a couple of runs with the two knives given. If everyone starts being this productive, then we will have books written in no time.

-Cliff
 
There really is no need to tie the cord off; just use the old boater's trick: a single wrap around something will prevent hand influence on the cord. If it still slips too much, two wraps should do the trick.
Sodak, are those per side or total angle? I'd suggest 15 and 20 per side, since those are the Sharpmaker angles and seem to be the most commonly used. I personally use 10 per side a lot and think that is a good one to test also, just because I use it. :)
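The single wrap works because of the capstan effect: cord tension falls off exponentially with wrap angle. A small sketch, with an assumed friction coefficient for cord on wood:

```python
import math

def holding_force(load_lb, wraps, mu=0.25):
    """Capstan equation: tension the hand must hold when the cord takes
    `wraps` full turns around a post before the load.
    mu = 0.25 is an assumed cord-on-wood friction coefficient."""
    theta = 2 * math.pi * wraps          # total wrap angle, in radians
    return load_lb / math.exp(mu * theta)
```

With an assumed 10 lb pull on the cutting side, one wrap leaves roughly 2 lb at the hand and two wraps under half a pound, which is why a second wrap cures any remaining slip.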
 
Tie the cord across an open space under a given tension. Lift it a set height with the edge and then draw the blade to cut the cord. This eliminates the variable of the hand force pulling down, as well as angles, but it takes a LOT longer to do. You can also do the same and just draw the blade on top of the cord with a given force, with the whole thing on a scale; this is then very similar to the general hemp cutting, only with a much smaller piece of rope.
-Cliff

That would take maybe 4-5 times longer, at least, but it would be more accurate and help to eliminate technique errors and bias. Being able to just lift the twine, slice it off, then go directly to the next cut as I did definitely streamlines the process compared to tying off and tensioning a new piece each time, but if someone (or I, in my spare time) proves a different method such as this to be substantially more accurate, it may be worth the extra time and effort to change the way I did this test. Either way, the results I got were in line with my other informal newsprint and shaving tests, as well as with how cleanly the rope was being cut, so I was confident in my results cutting the twine as I did, and it fit in with my limited time. I look forward to Sodak's results to see what differences there are between our tests; then we can try to figure out why, or whether we have significant differences, and we can go on from there on improving our testing.

Mike
 
Either way, the results I got were in line with my other informal newsprint and shaving tests, as well as with how cleanly the rope was being cut, so I was confident in my results cutting the twine as I did, and it fit in with my limited time.

Yes. One other thing you can do is simply have someone else perform a certain amount of cutting without telling you, and then you continue and keep measuring the sharpness. This totally prevents you from artificially forcing the results to be consistent, as you have no idea what they should be anyway. If your results still match, then you can be confident that you are giving objective data.

-Cliff
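Cliff's blinding suggestion could be scripted in a few lines; the cut-count range and the seed below are assumptions for illustration, not part of his procedure:

```python
import random

def blinded_wear_schedule(runs, extra_range=(10, 50), seed=1):
    """Sketch of a blinded wear test: before each of `runs` measurement
    points, a helper secretly performs an extra number of cuts drawn
    from extra_range. The tester sees only the measurement order; the
    secret counts are revealed at analysis time. Range and seed are
    illustrative assumptions."""
    rng = random.Random(seed)  # seeded so the helper can reproduce it
    return [rng.randint(*extra_range) for _ in range(runs)]
```

Because only the helper knows the list, the tester cannot steer sharpness readings toward what they "should" be.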
 
Do you feel that an "objective" tester (Cliff) being provided knives gratis compromises integrity...something that you have a great deal of, and know about?

Hi STeve,

Objective conclusion from a subjective point of view is an oxymoron. Illogical. Not possible. We simply do the best we can.

I help many "truth seekers" to do all kinds of testing. They share results, I get input. I also do my own testing. In the end, I have my "subjective" opinion of an "objective" test. Just like you do.

Humans invented the Internet. Many millions of humans fed all possible "knowledge" into the Internet. The sum total of all that is: Objectivity!

Then one day, a human asked the Internet, "Is there a God?"

The Internet replied, "There is now!" :p

Civil argument is good. "Truth" is an elusive shadow.

sal
 
I think it is certainly valid to point out the nature of subjectivity in any analysis, but there is no need to go so far as to prevent statements of objectivity. Sal raises an interesting point, though, about individuals' viewpoints on analysis being inherently subjective in many cases, unless they are proceeding on strict rules of math, which is rarely the case. This, however, is a meta-discussion removed from the methods themselves and more about how they are viewed and reacted to. Interesting all the same. As a case in point, imagine a piece of work which showed low edge retention on a cheap fantasy knife and on a knife by Fowler: the exact same methods, the exact same analysis, by the exact same person. In one case people could readily accept it, and in the other they find fault and attempt to condemn it. This of course is not a judgement of the data but of those doing the criticizing. Standards have to be uniform or else they are not standards, obviously, and if your criticism is not standard then it is amusing at best to start talking about bias.

-Cliff
 
Thanks for all the input, guys.

Db, it's per side. Here's what I'm proposing. I'll do one run of cutting, and post pictures on photobucket of the edgepro setting, the actual type of cutting test, and the rope and cutting that I do. I will include the SKU's of all the material and their sources. My goal is to be as transparent as possible, so that *anyone* can do the exact same test, if they desire, or as close to exactly the same as possible.

I'll do one run, and post, and wait a couple of days. Some of you guys will have suggestions that will improve what I'm doing, and I'll incorporate them. I don't want to finish the testing, write it all up, ship the knives off to the next tester, and -then- have someone say, "Hey, you could have done XX!" At that point, I'll slap my forehead and say, "I coulda hada V8!" :D

I look forward to your comments, but even more so to good criticism. I have no dog in this fight; although they are nice knives, I'm not planning on buying or selling either one, and I have no affiliation with either maker. The hardest part of being objective for me is trying to forget the previous results and just see what happens.

I'll post the results in a couple of days; I'm hoping to get started tonight or tomorrow.
 
I'll do one run, and post, and wait a couple of days.

That sounds good; however, I would strongly caution against judging results from one run. I have seen a lot of variability in that type of thing myself, so take that into account and don't be too hasty to draw conclusions. Of course, observations and suggestions on method would be welcomed as early as possible, for the reasons you noted.

-Cliff
 
I think it is certainly valid to point out the nature of subjectivity in any analysis, but there is no need to go so far as to prevent statements of objectivity. Sal raises an interesting point, though, about individuals' viewpoints on analysis being inherently subjective in many cases, unless they are proceeding on strict rules of math, which is rarely the case. This, however, is a meta-discussion removed from the methods themselves and more about how they are viewed and reacted to. Interesting all the same. As a case in point, imagine a piece of work which showed low edge retention on a cheap fantasy knife and on a knife by Fowler: the exact same methods, the exact same analysis, by the exact same person. In one case people could readily accept it, and in the other they find fault and attempt to condemn it. This of course is not a judgement of the data but of those doing the criticizing. Standards have to be uniform or else they are not standards, obviously, and if your criticism is not standard then it is amusing at best to start talking about bias.

-Cliff
Cliff, if any engineer/technical person working for me wrote this mess, I would make them pay at the next performance review. You use buzzwords and overlapping themes to impress your readers. More often you end up making professionals doubt you, because when we see things like this at work it is normally to cover up a total lack of understanding. This was about whether you can be objective comparing two items when you buy one and get one free. Remember that, at least in the technical community, one must avoid all appearance of bias for one's views to be taken seriously. Steven
 
This was about whether you can be objective comparing two items when you buy one and get one free.

That was the main point which I addressed originally. The thread then diverged into the general idea of inherent subjective bias, which, while being a point of consideration, is not a complete certainty, nor in any case an issue without a solution.

-Cliff
 
I am still on the fence regarding the test methodology. I am more concerned about this than about possible bias, at least in the pre-analysis stage.

Trying to measure the amount of edge used during a slice with your eyeball seems like a difficult measurement to take. I also have issues with the uniformity of force applied from one measurement to the next, as well as lateral forces and possible cord wrap of the edge during the cut. There is no verification that the slices are made at the same speed either, and in the best theoretical model I have seen for slicing, velocity of the slice is a critical factor in the force needed to make the cut. As the speed of the slice goes very low, you are measuring less slicing ability, and more push cut force.
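As a hedged illustration of the velocity point (not the specific model Broos has in mind), slicing analyses such as Atkins' slice-push-ratio work find that the force needed to cut falls as the ratio of draw speed to pressing speed rises; the particular falloff below is just one simple form with that qualitative behavior:

```python
def relative_cut_force(slice_speed, push_speed):
    """Cutting force relative to a pure push cut.
    xi is the slice-push ratio; xi = 0 (no draw) gives 1.0, i.e. a pure
    push cut. The 1 / (1 + xi**2) form is an illustrative assumption,
    chosen only because it falls monotonically as xi grows."""
    xi = slice_speed / push_speed
    return 1.0 / (1.0 + xi ** 2)
```

In this toy model, a very slow draw (xi near zero) measures almost pure push-cut force, which is exactly the concern raised above about slow slices.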

Regardless, I'll look forward to seeing results.:)
 
Broos, I also believe this method is probably less reliable than the methods they criticized. However, I don't have a problem with their method as long as they honestly report the method and their results.
 
I am still on the fence regarding the test methodology. I am more concerned about this than about possible bias, at least in the pre-analysis stage.

I was worried about my measurements when I did the test, but like I said the informal measures matched up well. Time will tell if others' tests confirm or disprove my results. I talked to Phil Wilson about making the scale test better, and he had some great suggestions, but I really think one of my ultra-thin Krein regrinds in a less wear-resistant steel will beat out a thicker, more wear-resistant steel due to the wedging forces in that test, when in truth the thicker knife would have better edge retention in slicing, while the smaller knife has better cutting ability. I think that is why Goddard uses identical geometries on the test knives if possible. That test is a good test of how each individual knife performs against the others IMO, but it won't necessarily tell you which steel has better slicing edge retention unless the knives are of similar geometry. This is where the optimal geometry becomes interesting: if one steel can do the same work as another without taking damage at a thinner geometry, yet has less wear resistance, you are then comparing whether the geometry advantage of the thin knife can overcome the wear resistance advantage of the thicker knife. So many tests to be done, but so little time!

Mike
 
I was worried about my measurements when I did the test, but like I said the informal measures matched up well. Time will tell if others' tests confirm or disprove my results.

I agree this would go a long way in determining whether the measurement can be taken consistently.

I'd harp some more, but I don't want to be too critical. I think it's great that you're testing, & double great that you're sharing your results so AR critics like myself can take a whack at 'em!
 
I'd harp some more, but I don't want to be too critical. I think it's great that you're testing, & double great that you're sharing your results so AR critics like myself can take a whack at 'em!

Hey, do some testing yourself and maybe you can come up with some good suggestions! You are obviously a bright guy, and any other testing or methods to look at and consider can only help me to improve the limited testing that I try. It is a lot of work and always imperfect, but hopefully if enough of us do some testing we can come to understand our cutting tools and their performance better.

Mike
 
I was worried about my measurements when I did the test, but like I said the informal measures matched up well.

One thing to be careful of when you are taking measurements is that you may, even unintentionally, force results to match previous data. At a minimum, do not look at your last run's measurements while recording the next. Even better, do the runs at different intervals. Best of all is to have someone else do the cutting while you do the measurement of sharpness, at least in part.

I know you may be very confident that you are being totally objective, and in fact you might actually be; however, it never hurts to check, and the first two alterations to the method take no extra time. Using different intervals in subsequent runs complicates the analysis a little, but nothing serious, and I can handle that anyway for those who do not love number crunching. I will release the code to do it shortly, with a proper algorithm description.
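One way to handle runs measured at different intervals is to fit sharpness against cumulative cuts, so each reading carries its own x value and unequal spacing costs nothing. This is only an illustrative sketch: the power-law blunting model and the function name are assumptions, not Cliff's unreleased code.

```python
import math

def fit_power_law(cuts, sharpness):
    """Least-squares fit of sharpness = a * cuts**b, done as a linear
    regression in log-log space. `cuts` are cumulative cut counts at
    each measurement (any spacing); `sharpness` are the readings."""
    xs = [math.log(c) for c in cuts]
    ys = [math.log(s) for s in sharpness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope b and intercept of the log-log regression line
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b
```

Because the fit uses each run's own cut count, two testers who measured at 20-cut and 35-cut intervals can still be compared on the same fitted curve.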

I talked to Phil Wilson about making the scale test better, and he had some great suggestions, but I really think one of my ultra-thin Krein regrinds in a less wear-resistant steel will beat out a thicker, more wear-resistant steel due to the wedging forces in that test, when in truth the thicker knife would have better edge retention in slicing, while the smaller knife has better cutting ability.

Yes, this is a test of the total cutting ability of the knife, so it is dependent on both sharpness and geometry: the force you are measuring is the sum of the wedging force and the force on the edge, and only the latter depends on sharpness. It is possible to extract the sharpness and edge retention by modeling it (or by looking at differences), but this will never be as accurate or precise as measuring it directly. Of course, it is much faster just to do hemp cutting with no external sharpness measurements, many times over.
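The decomposition described above can be sketched in a couple of lines; the split into a wedging term and an edge term follows the description, while the function names and any numbers are illustrative assumptions:

```python
def total_force(wedging, edge):
    """Scale reading for one cut: a geometry-driven wedging term plus a
    sharpness-driven edge term (illustrative split)."""
    return wedging + edge

def edge_force_change(reading_sharp, reading_worn):
    """Same blade, same rope: the wedging term is identical in both
    readings, so the difference isolates the change in the edge term,
    i.e. the loss of sharpness."""
    return reading_worn - reading_sharp
```

This is the "looking at differences" idea: comparing the same blade sharp versus worn cancels the geometry contribution, but comparing two blades of different geometry does not.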

This is where the optimal geometry becomes interesting: if one steel can do the same work as another without taking damage at a thinner geometry, yet has less wear resistance, you are then comparing whether the geometry advantage of the thin knife can overcome the wear resistance advantage of the thicker knife.

Yes, this is the argument put forth by Johnston in the late nineties, who noted that 1095 vs. ATS34 at very thick edges/angles (20+ degrees) compared very differently than when the angles were less than 10 degrees per side and the blades were less than 0.020" thick at 1/4" back from the edge.

ATS34 is a very high-wear steel, significantly more so than 1095, but when the edges were very thin and acute, the large carbides in ATS34 (about 25 microns) simply broke out of the edge in use and in fact lowered its wear resistance. Landes later measured this directly with his edge stability data.

-Cliff
 
I have a hard time believing there is a significant wedging force when slicing thin slices off the end of a half-inch manila rope. I do agree that no test is perfect, and the more tests the better. While testing is fun and interesting, it can also be misleading. I think the best way to learn and understand your knife and how it performs is to simply use it for what you use a knife for.
 