My List of Steels in order for Edge Retention So Far

Vassili, I have posted in the testing section since last fall. I used to cut cardboard of the same strength and time by the clock when a knife quit shaving hair. After cutting cardboard for 23 minutes with the M4 Mule still shaving hair, I decided to try other methods for the sake of brevity and effort. Since then most cutting has been on 275 lb. edge crush test cardboard. I draw guide lines on the cardboard to make all cuts the same length and count the cuts until I can hardly shave hair.

These tests have involved a Cold Steel Voyager in VG-1 and Spydies in VG-10, ZDP-189, and S30V. Some knives were blunt and I had to rebevel, so not all bevels were identical. I also threw in Queen's D2 and GEC 1095. Presently the most empirical results were derived by being able to test Spydies in FFG: VG-10, ZDP-189, and S30V. The models were the Stretch, Millie, and Enduras. Although not as empirical as knarfeng's sisal rope tests, my general results were somewhat validated by Frank's sisal cutting. Although cardboard cutting may not be as empirical as some other media, I have found direct correlations between cardboard and performance on hog and whitetail deer processing in the field.

I don't have the ability to measure edge bevel angles and the width of the edge behind the cutting edge, nor can I know the actual Rockwell reading that knarfeng can obtain, but over 30 years of testing knives on cardboard does give a person some pretty good knowledge of alloys and heat treats by various makers. M4, followed by ZDP-189, with GEC 1095, VG-10, and VG-1, leads me to conclude that all these alloys used in the field will feel close in performance in hand. The Spydie M4 in the Mule was noticeably stronger in edge durability. Queen's D2 seemed to lose its initial keenness sooner, but after a certain point it seemed to dull no further. Dennis
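
For anyone who wants to keep this kind of tally in a form that is easy to compare later, here is a minimal sketch in Python, assuming you simply record equal-length cuts until the edge stops shaving; the steel names and numbers below are made-up placeholders, not Dennis's actual results:

```python
# Minimal sketch: tally equal-length cuts on 275 lb ECT cardboard until the
# edge no longer shaves hair. All entries below are made-up placeholders.

results = {}  # steel name -> list of cut counts, one per session

def record_session(steel, cuts_before_failure):
    """Record how many cuts one blade made before it stopped shaving."""
    results.setdefault(steel, []).append(cuts_before_failure)

# Hypothetical entries, not real test data:
record_session("CPM M4", 240)
record_session("ZDP-189", 200)
record_session("VG-10", 150)

# Print steels from best to worst by their best session.
for steel, counts in sorted(results.items(), key=lambda kv: -max(kv[1])):
    print(f"{steel}: best {max(counts)} cuts over {len(counts)} session(s)")
```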
 
I find it puzzling that the tests by Ankerson and Nozh produced such widely varying results. It seems ZDP-189 is the only consistent performer. :confused:

I'd be VERY surprised if their results were the same.

As they say, "one test is worth a thousand opinions." Dennis

I think the opposite. When something is called a test, yet multiple trials produce different results, what the results are telling you is that your test method is a review. When it comes to actually trying to determine a characteristic of the steel, I'll take one result from a test that provides repeatable results, using a method that was actually designed to answer the hypothesized question, over a thousand "tests" that don't meet those criteria.

Nozh is testing edge retention.

It depends on how you define edge retention. If you define it as "force required to push cut a thin thread versus wear induced by cutting manila rope using a 1" slice, using test knives sharpened at the same edge angle but with unequal bevel thicknesses, primary grind angles, and blade curvatures", then I would agree with you.

All that said, I enjoy seeing stuff being cut and listening to opinions on what was found from it. I think that some of our best information on knife and steel performance comes from anecdotal experience. What bothers me is when a tester posts here, then gets mad because someone points out why their test is not measuring what they think it is measuring. Coming up with the method, and interpreting the results accurately, has nothing to do with the act of testing. No amount of testing is going to help answer those questions, because they are 100% dependent on knowledge and thought.
 
It depends on how you define edge retention. If you define it as "force required to push cut a thin thread versus wear induced by cutting manila rope using a 1" slice, using test knives sharpened at the same edge angle but with unequal bevel thicknesses, primary grind angles, and blade curvatures", then I would agree with you.

I never said I agree with his tests. I simply stated what each person feels they are "testing" in what they are doing, and that Jim is judging based on different characteristics than Vassili.
 
I never said I agree with his tests. I simply stated what each person feels they are "testing" in what they are doing, and that Jim is judging based on different characteristics than Vassili.

Exactly, since I am not dulling the knives to the point they won't cut anymore the results will be very different.

Some steels develop a working edge that will keep on cutting for a very long time, while others don't. To test them to the point of being dull on video would take a very long time; the videos could be over an hour long for each steel.

I use cardboard because it's abrasive enough to show which steels will hold that keen, razor-sharp edge and which don't within a 5-minute time frame.

My tests are not the last word, nor are they meant to be for that matter, because there are variables as we all know.

The bottom line really is that this is what I got for results; others' results will most likely vary. Only knife companies have the kind of R&D money required to do scientific, repeatable testing like Spyderco does.

Now if I had a few hundred grand or more to burn, then maybe I, or someone else with that kind of financing, could do it all on video.

Short of that, I don't see it happening, since I haven't seen any knife companies putting out video of those types of tests comparing different steels on a broad scale.

Something like this maybe:

20 of each blade steel at the exact same hardness, all with the exact same grind.

But then we would need different hardnesses too, so that would be 20 of each of those as well.

Then produce a testing environment with equipment and a consistent testing media.

Video the whole process.

We would also need scientists, and they would have to be paid.

Tally up all the data...

The expense would add up very quickly, likely more than $500,000 by the time it was done, or even a lot more.
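
Even just counting blades shows why the number gets large so fast; here is a rough sketch of that hypothetical test matrix, where the steels, hardness points, replicate count, and per-blade cost are all illustrative assumptions:

```python
# Rough sketch of the hypothetical test matrix described above.
# Every value here is an assumption for illustration only.

steels = ["CPM M4", "ZDP-189", "S30V", "VG-10", "154CM"]  # example steels
hardnesses = [58, 60, 62]   # example HRC points per steel
replicates = 20             # blades per steel/hardness combination, as suggested above
cost_per_blade = 300        # assumed cost (USD) to grind one identical test blade

total_blades = len(steels) * len(hardnesses) * replicates
print(f"Blades needed: {total_blades}")                         # 300
print(f"Blade cost alone: ${total_blades * cost_per_blade:,}")  # $90,000
# ...and that is before lab time, equipment, scientists, test media, and video.
```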

I don't know anyone that has that kind of coin to blow on something like that.

So people can whine about repeatable scientific testing all they want, but until someone with a lot of money really does it, it's all relative.
 
I'd be VERY surprised if their results were the same.



I think the opposite. When something is called a test, yet multiple trials produce different results, what the results are telling you is that your test method is a review. When it comes to actually trying to determine a characteristic of the steel, I'll take one result from a test that provides repeatable results, using a method that was actually designed to answer the hypothesized question, over a thousand "tests" that don't meet those criteria.



It depends on how you define edge retention. If you define it as "force required to push cut a thin thread versus wear induced by cutting manila rope using a 1" slice, using test knives sharpened at the same edge angle but with unequal bevel thicknesses, primary grind angles, and blade curvatures", then I would agree with you.

All that said, I enjoy seeing stuff being cut and listening to opinions on what was found from it. I think that some of our best information on knife and steel performance comes from anecdotal experience. What bothers me is when a tester posts here, then gets mad because someone points out why their test is not measuring what they think it is measuring. Coming up with the method, and interpreting the results accurately, has nothing to do with the act of testing. No amount of testing is going to help answer those questions, because they are 100% dependent on knowledge and thought.
You and Mastiff are correct, as usual.
 
We eagerly await broos's in-depth cutting reviews. All the oration in the world is nothing but air. Please, Mr. broos, demonstrate your empirical methods. Dennis
 
I'm particularly interested in knowing which knife was used for the final results on the S30V placement.
 
We eagerly await broos's in-depth cutting reviews. All the oration in the world is nothing but air. Please, Mr. broos, demonstrate your empirical methods. Dennis

Every time someone starts talking about repeatable results or scientific testing, I start thinking something like what I posted above....

It's all about $$$$$$$$$$$$$$$$$$$ and lots of it..... Hundreds of thousands of dollars, and depending on the subject it could approach a million...

Labs aren't cheap.

Equipment isn't cheap.

Scientists aren't cheap.

Making a video documentary isn't cheap; it would be like a miniseries on TV.

The testing materials needed to get a repeatable average aren't cheap.

That's the reason why I start laughing every time someone brings it up...

It's just not realistic thinking... That is unless someone wants to start cutting huge checks and sending them to the testers....
 
First, thanks for the effort, especially for being consistent on edge angles :)

I have not used some of the steels in your list, but VG-10 vs. VG-1 was a fairly recent comparison for me, about a 3-hour cutting session in the kitchen for each, and VG-10 came out on top; no surprises there.

What is really puzzling is CPM M4 being somewhere in the middle. It would really help if you posted the specified HRC for the test blades. At least in my experience, 1 HRC can make a noticeable difference in the long run, and M4 is heat treated anywhere from 60 to 65 HRC as far as I know.

I have one M4 knife from Phil Wilson, 64 HRC, thin blade, and that thing cuts cardboard forever, retaining a shaving-sharp edge.

Don't get me wrong, I use that shaving-sharp term a lot myself, but it's a really unreliable way to compare the sharpness of two different knives, let alone steel performance. As I get better at sharpening, I can achieve that shaving-sharp edge with lower and lower grits. On some steels I can get that level with a 700-grit Bester whetstone, but for most it takes 1000-1200 grit. Obviously that's far below 32K or 100K edges.

I still don't know a fast and reliable method to accurately measure sharpness, though :) Especially for push cutting.
 
First, thanks for the effort, especially for being consistent on edge angles :)

I have not used some of the steels in your list, but VG-10 vs. VG-1 was a fairly recent comparison for me, about a 3-hour cutting session in the kitchen for each, and VG-10 came out on top; no surprises there.

What is really puzzling is CPM M4 being somewhere in the middle. It would really help if you posted the specified HRC for the test blades. At least in my experience, 1 HRC can make a noticeable difference in the long run, and M4 is heat treated anywhere from 60 to 65 HRC as far as I know.

I have one M4 knife from Phil Wilson, 64 HRC, thin blade, and that thing cuts cardboard forever, retaining a shaving-sharp edge.

Don't get me wrong, I use that shaving-sharp term a lot myself, but it's a really unreliable way to compare the sharpness of two different knives, let alone steel performance. As I get better at sharpening, I can achieve that shaving-sharp edge with lower and lower grits. On some steels I can get that level with a 700-grit Bester whetstone, but for most it takes 1000-1200 grit. Obviously that's far below 32K or 100K edges.

I still don't know a fast and reliable method to accurately measure sharpness, though :) Especially for push cutting.

Whatever the M4 Mules are... :confused:
 
Well, what can I say...
I got a rehardened 154CM blade, which is now 62 HRC, and it outcuts M2 at 59-60 HRC by a significant margin; same for S30V at the same hardness, and I am sure it will outcut 110V and a few others at 58-60 HRC. I wouldn't consider 154CM a better steel per se.
It's knife vs. knife.
 
Vassili, I have posted in the testing section since last fall. I used to cut cardboard of the same strength and time by the clock when a knife quit shaving hair. After cutting cardboard for 23 minutes with the M4 Mule still shaving hair, I decided to try other methods for the sake of brevity and effort. Since then most cutting has been on 275 lb. edge crush test cardboard. I draw guide lines on the cardboard to make all cuts the same length and count the cuts until I can hardly shave hair.

These tests have involved a Cold Steel Voyager in VG-1 and Spydies in VG-10, ZDP-189, and S30V. Some knives were blunt and I had to rebevel, so not all bevels were identical. I also threw in Queen's D2 and GEC 1095. Presently the most empirical results were derived by being able to test Spydies in FFG: VG-10, ZDP-189, and S30V. The models were the Stretch, Millie, and Enduras. Although not as empirical as knarfeng's sisal rope tests, my general results were somewhat validated by Frank's sisal cutting. Although cardboard cutting may not be as empirical as some other media, I have found direct correlations between cardboard and performance on hog and whitetail deer processing in the field.

I don't have the ability to measure edge bevel angles and the width of the edge behind the cutting edge, nor can I know the actual Rockwell reading that knarfeng can obtain, but over 30 years of testing knives on cardboard does give a person some pretty good knowledge of alloys and heat treats by various makers. M4, followed by ZDP-189, with GEC 1095, VG-10, and VG-1, leads me to conclude that all these alloys used in the field will feel close in performance in hand. The Spydie M4 in the Mule was noticeably stronger in edge durability. Queen's D2 seemed to lose its initial keenness sooner, but after a certain point it seemed to dull no further. Dennis

We need to put it in some table or rating, and it would be better to have this rating numeric so it will be easy to compare. It would also be good to have the method defined step by step so other people can reproduce it.

In your case let me just put it this way:

1. CPM M4
2. GEC 1095
3. VG10, VG1

Something like this.

Thanks, Vassili.

Let me just add all the ratings we have. You may see that when they are all in one place for a side-by-side comparison it is already more useful; see the small sketch after the lists below.

ZDP-189
VG-10
CPM 154
CPM S35VN
VG-1
CPM M4
CTS-BD1
Duratech 20CV
CPM S30V
AUS-8A
SR-101
INFI

I have:

5. Yuna Hard II ZDP-189
6. SwampRat SR101 (52100)
8. Spyderco Mule CPM M4
15. Buck CPM154
17. Buck CPM S30V
20. Great Eastern Cutlery 1095
21. CTS-BD1
24. Kershaw CPM S30V
25. Spyderco Mule CPM S35VN
28. Busse INFI
36. CRKT AUS8
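
A minimal sketch of what that side-by-side view could look like in code, using abbreviated, made-up rank lists rather than the full ratings above:

```python
# Minimal sketch: line two testers' rankings up side by side by rank position.
# The rank lists here are abbreviated examples, not the complete ratings.

dennis_rank = ["CPM M4", "ZDP-189", "GEC 1095", "VG-10", "VG-1"]
vassili_rank = ["ZDP-189", "VG-10", "CPM M4", "VG-1"]

def position(ranking, steel):
    """1-based rank, or '-' if the tester did not rate that steel."""
    return ranking.index(steel) + 1 if steel in ranking else "-"

print(f"{'Steel':<10} {'Dennis':>7} {'Vassili':>8}")
for steel in sorted(set(dennis_rank) | set(vassili_rank)):
    print(f"{steel:<10} {str(position(dennis_rank, steel)):>7} "
          f"{str(position(vassili_rank, steel)):>8}")
```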
 
Every time someone starts talking about repeatable results or scientific testing, I start thinking something like what I posted above....

It's all about $$$$$$$$$$$$$$$$$$$ and lots of it..... Hundreds of thousands of dollars, and depending on the subject it could approach a million...

Labs aren't cheap.

Equipment isn't cheap.

Scientists aren't cheap.

Making a video documentary isn't cheap; it would be like a miniseries on TV.

The testing materials needed to get a repeatable average aren't cheap.

That's the reason why I start laughing every time someone brings it up...

It's just not realistic thinking... That is unless someone wants to start cutting huge checks and sending them to the testers....

As I said before, any testing is scientific.

Please do not buy this crap about science only being done in a lab by "scientists" in white robes with super-expensive equipment. That may be true for modern nuclear physics, but at the beginning Marie Curie boiled radioactive ore in a big pot. Research in the edge science area does not need to be like modern nuclear physics yet.

This is the usual response from people who simply have no idea what they are talking about and just like to argue against our test results because they have no other arguments, but cannot accept the fact that their favorite steel is not the best of all. I would say that what we are doing is close to how science was done 100-200 years ago.

Right now there is no branch of science that studies and describes edge behavior; we are the first who are just starting to research this by collecting facts. Those facts had better be reproducible, numerous, and numeric, so they will be easy to analyze. This is the first step in scientific methodology.

When we collect enough facts we may come up with some theory which explains all the collected facts, but it will still need to be proven: it needs to predict some behavior not known before. That is the next step.

Once that is done, the predicted behavior needs to be tested by independent teams, and the tests need to come out positive. At that point it will be a proven theory.

What do we have right now in "edge science"? Absolutely nothing. We do not even have enough facts to analyze. The facts we do have are not in good enough condition to analyze (to apply math to).

So if you do your tests with a full description, so that anyone will be able to repeat them, and try to take numerical measurements, it will be very scientific. It requires some effort, of course, but it does not need to be the same as nuclear physics. I would say it is a chance for everyone to contribute to the edge science process.
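
As one concrete example of "numeric and reproducible", here is a minimal sketch of recording repeated trials for a single blade and checking how repeatable they are; the trial numbers are invented placeholders:

```python
import statistics

# Minimal sketch: repeated numeric trials for one blade on one test medium.
# The numbers are invented placeholders, not real results.
trials = [412, 398, 405, 420, 391]  # e.g., cuts until a fixed sharpness threshold

mean = statistics.mean(trials)
stdev = statistics.stdev(trials)
cv = stdev / mean  # coefficient of variation: lower means more repeatable

print(f"mean = {mean:.1f}, stdev = {stdev:.1f}, CV = {cv:.1%}")
# A step-by-step description of the method should accompany numbers like
# these so that someone else can repeat the trials independently.
```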

What we have now about the edge is mostly speculation fueled by marketing departments, plus some metallurgical charts and test results not directly related to edge holding.

Thanks, Vassili.
 
Ok, now I'm confused - should I get a Sebenza or not? :confused:

Why do you want to turn this thread into a flame war?

Testing blades is quite an effort, and the knife community needs to have some clarity on edge retention. There are not too many of us, and here we are trying to support each other. Your comment may derail this into the usual flame war.

Thanks, Vassili.
 
Ok, try this on with no flames attached: convex your blade edge and use D2 or G2 steel. This should work for about 99% of your uses, the knife will be easy to resharpen without a rolled edge or metal loss, and you'll have plenty of time left over to fish... fair enough?

p.s. You didn't answer my question...
 
Ok, now I'm confused - should I get a Sebenza or not? :confused:

How about you get a clue.

Just because you disagree with a thread doesn't mean you should derail it with a somewhat trollish response.

Some people here are actually interested in the results however unscientific they may be.
 
Vassili, the cardboard tests were conducted in several posts over several months. I listed "my" results for each test at that time for the knives and alloys used. While I agree it's nice to arrive at an informative posted scale, I did the tests for my own understanding of knife and alloy quality. As I always like to mention:
The tests aren't strictly empirical.
I do these activities as a hobby, not a profession.
No conclusive, written-in-stone facts are declared.
I simply try to pass on my experiences to other members, who can read them or not.
I depend on the members who read these to retain whichever info they desire.
Let's be candid: my next-door neighbor might do the same tests and get different results.
In my view, if enough members get off their keyboards and start actually cutting instead of theorizing, over a period of time we will get some balance of experiences that gives the novice a point of departure for making intelligent choices. Vassili, you have done a massive amount of testing, much of which I've appreciated, but all of it is conjecture. The members are probably more concerned that if they choose a knife, it should be able to perform the jobs it was designed for. Who wishes to be in the boonies with a 400 lb. pig on the ground and have to sharpen the blade 10 times to get the job completed? This is of some concern if one is kneeling in mud and it's 30 degrees. I'm not fighting anyone; I only wish we would see more people actually cutting instead of looking at charts.
 
I think this thread is fascinating, and I really am grateful to everyone who spent their time and money putting this together for the rest of us...

But I suspect the order could change a bit if you 'took the gloves off' and let each steel run the test at its optimal geometry (if 30 degrees isn't already the best for it).

The Ford/Chevy rivalry between 154CM and S30V might turn out a little differently if you let S30V run at a geometry where its higher edge stability could be taken advantage of...

That said, I really do appreciate this a lot, and you guys rock.
 
No need to get your feelings hurt, dennisstrickland. I just disagree with your opinion that any testing in and of itself is better than other knowledge or theory.

I do a lot of cutting also, and feel strongly about the impressions I have from cutting and sharpening, but at the same time I realize that my impressions are not totally objective. It is not easy to come up with objective testing, and really doing it right would require much thought and some investment. This is a hobby for me also. I'd love for someone to pay me 100K a year to test knives, but I'm not expecting any offers, so if you don't mind I'll continue to offer my opinion periodically.

Feel free to disagree with me, but once again, the "you have to do a test to offer an opinion of a test" argument does not do you or me justice, and hopefully you can offer up an actual point that describes where exactly you disagree with me. Did you read the last paragraph in my post?
 