New testing session.

Bumping for the added 1095 test/review.

Also, Broos, aren't you in a way asking about the edge stability of the steels? Like how much side loading there is on the edge, and how much that affects edge retention? Interesting considering other threads.

Anyway, here are a couple of 4-period moving averages, just because I don't know how useful the charts really are with the data as it is.

[charts: 4-period moving averages of sharpness at 0 cuts and at 200 cuts]

The sharpness at 200 cuts seems to move downward (improve) and scatter less, while the sharpness at 0 cuts doesn't change much, though it decreases and then increases.
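
For anyone who wants to reproduce the smoothing, here is a minimal sketch of a 4-period moving average; the readings in it are made-up placeholders, not actual test data.

```python
# Minimal sketch of a 4-period simple moving average, as used for the charts.
# The readings below are made-up placeholders, not actual test data.

def moving_average(values, window=4):
    """Average each run of `window` consecutive readings."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

# Hypothetical sharpness readings (oz to cut thread) across successive tests;
# lower is sharper.
sharpness_at_200_cuts = [5.0, 4.5, 4.5, 4.0, 4.5, 4.0, 3.5, 4.0]

print(moving_average(sharpness_at_200_cuts))
# [4.5, 4.375, 4.25, 4.0, 4.0]
```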

Vassili, can you comment on any changes you've made over the course of the testing?

rope
thread
rope cutting jig
thread cutting jig
scale
sharpening technique
sharpening equipment
timeframe
procedure
 
There is a ZDP189 retest - can you connect the results for that? I guess it will demonstrate the drift over time better.

Rope - I bought it from time to time in different local stores - OSH, Lowe's, and Home Depot. It seems that at least the last time I did this I bought out all the 1/2" manila rope stock and had to search further and further. Once I left rope outside in the rain and noticed that it was easier to cut - so I did not use that one, and I keep the whole bunch under a roof. The one I have now has some green thread in it - but it is just dyed manila rope, I think.

Thread - all the same, some "Aunt something" from Walmart.

I lost my old rope cutting jig and recently made a new one - better, I guess:

[image: new-base.jpg]


New thread cutting jig (a major improvement - much less stress on my wrist now), in use after Dec 14:

http://www.youtube.com/watch?v=1Ftbus7LbtU

The scale is the same - see that video.

Sharpening technique - the same, but I think I am getting better.

Sharpening equipment - the same.

Procedure - I now stop at each point and count to 5, moving down step by step.

I stopped last year's testing session on Dec 14.

Thanks, Vassili.
 
Just finished the second run (resharpened the same knife and cut rope 400 times again):

Cuts - Run 1 (oz) / Run 2 (oz)
000 - 0.5 0.5
001 - 2.0 2.5
010 - 3.5 3.5
050 - 5.0 4.0
100 - 4.5 4.0
200 - 4.0 4.5
400 - 4.0 4.0

So it seems the results are quite stable. I will do another run to have three of them (if my arm feels OK), and then, to check whether I have simply gotten too good at cutting-testing, I shall retest some poorly performing steel from last year's testing.
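
To put a rough number on "quite stable", here is a minimal sketch comparing the two runs, using the figures from the table above:

```python
# Quantify run-to-run variation using the (cuts, run 1 oz, run 2 oz)
# figures from the table above.
runs = [
    (0,   0.5, 0.5),
    (1,   2.0, 2.5),
    (10,  3.5, 3.5),
    (50,  5.0, 4.0),
    (100, 4.5, 4.0),
    (200, 4.0, 4.5),
    (400, 4.0, 4.0),
]

diffs = [abs(r1 - r2) for _, r1, r2 in runs]
print("max difference:  %.1f oz" % max(diffs))                 # 1.0 oz
print("mean difference: %.2f oz" % (sum(diffs) / len(diffs)))  # 0.36 oz
```

A spread like this is the kind of "natural random variation" baseline discussed a few posts below: a retest would have to differ by clearly more than this before it says anything about drift.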

Thanks, Vassili.
 
Here are the first and second ZDP tests:

[image: nozh4.png]

I meant, on your first graph, just to connect the first and second ZDP dots with a straight line. If we assume the results should be the same, we may correct the other results - shift the results plane to make this line horizontal, a kind of linear transformation. However, I am not sure it should be linear...

I think I need to do some baseline testing from time to time - like testing this 1095 after every 5 other tests. In that case, if any drift happens, we may correct the results of the other tests.

To do this, however, we need to know the possible variation in results for the same steel in tests done one after another - then we can say whether a difference in results indicates "skill improvement" or is just natural random variation.
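
For illustration, here is a rough sketch of how that baseline correction could work, with made-up numbers: interpolate the baseline's drift between checkpoints and subtract it from the tests done in between. Whether linear interpolation is the right model is, as noted above, an open question.

```python
# Rough sketch of baseline drift correction (all numbers are made up).
# A baseline knife (e.g. the 1095) is retested every few tests; any change
# in its score is treated as tester/setup drift, interpolated linearly
# between checkpoints, and subtracted from the tests done in between.

def drift_at(test_index, checkpoints):
    """Linearly interpolated baseline drift at a given test index.

    checkpoints: sorted list of (test_index, baseline_score) pairs.
    Drift is the baseline's deviation from its first measurement.
    """
    first_score = checkpoints[0][1]
    for (i0, s0), (i1, s1) in zip(checkpoints, checkpoints[1:]):
        if i0 <= test_index <= i1:
            frac = (test_index - i0) / (i1 - i0)
            return s0 + frac * (s1 - s0) - first_score
    return checkpoints[-1][1] - first_score

# Hypothetical: the baseline scored 4.0, 3.5 and 3.0 oz at tests 0, 5, 10.
baseline = [(0, 4.0), (5, 3.5), (10, 3.0)]

raw = 5.0                              # some other steel, run as test 7
print(raw - drift_at(7, baseline))     # 5.0 - (-0.7) = 5.7
```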

Thanks, Vassili.
 
Great tests. Yeah, I would like to see some of the older ones retested with the improvement in your technique.

1095 can serve as the baseline for all tests. My 1095 blades @ 58 HRC actually wear out pretty quickly, but they do have very high initial sharpness. My 1095 @ 63-65 HRC performs like a monster, though.
 
Yeah, I don't think a straight line would work; the slope would change too much depending on whether the retest came one test earlier or later, while I can't see that the actual test results would have changed that much.

How different were the two D2 (not friction forged) and the two S30V knives?
 
I don't necessarily expect the same results for the same steel from different manufacturers. I know for sure that Buck has a better heat treatment for the same steel than any other manufacturer, and I do not see why the results for its CPM S30V should be the same as for Kershaw's CPM S30V - this difference, I think, is rather expected.

Same with D2 - one is a limited run for Benchmade, and the other is made by KaBar, which has a strong relationship with Dozier; it is actually made in Japan, and not from the same D2 but from Japanese D2. In this case I do not see why it should be the same either.

So I do not really consider this difference as evidence that there is something wrong with the test.

It is only for ZDP189 that I retested the very same knife.

Thanks, Vassili.
 
I would expect the same results if I thought they were using industry standard heat treat, but with Bos doing the S30V, I wouldn't necessarily expect that.
 
I know for sure that different manufacturers get different results for the same steel.

Also, modern metallurgy does not really care much about knives; it develops heat treating processes mostly for other applications, with an eye on big numbers and mass production, so a good craftsman can always improve performance over the standard. That is what Paul Bos is doing, as I understand it.

So different results from different manufacturers for the same steel do not surprise me at all.

Thanks, Vassili.
 
How do you know for sure?

I disclose all my process and results, so my sureness is based on that; in my opinion I do know this for sure, and everybody knows why - it is all written here.

If you are going to participate in this thread, can you describe how you do steel performance testing at Kershaw? You have R&D and QA. Can you tell us what the industry standard is for blade performance testing?

Thanks, Vassili.
 
Of course you do. All your data, processes, and results on the specific articles are posted. Although I think that gives a decent picture of the samples tested, I question whether generalities about a specific steel or manufacturer (this steel or this manufacturer is better than another...) can be drawn from them.

This may be for a different thread, and sorry, I should have been clearer, but I was asking more about how you know this for sure:

I know for sure that Buck has a better heat treatment for the same steel than any other manufacturer

What are you basing that statement on? I apologize in advance if in fact you have factory insight, but unless you do, those broad statements are really not truthful. More probable is that you don't know anything about any manufacturer's HTing, including Buck. For accuracy's sake I would caution posters and readers alike when evaluating such statements.

If you are going to participate in this thread ...
I'm going to pass on the thread, and say sorry if this drifted off its original course. Carry on.
 
My question on how Kershaw tests edge retention for QA (Quality Assurance) and R&D (Research and Development) remains ignored. Given that, how do you expect me to have any factory insight?

I rely on my tests, my experience, and other people's reports (and I have heard many say that Buck has the best HT in the industry for the steels they use). Based on this information I draw my conclusions - if you have other evidence, please provide it.

But you cannot ask me not to draw these conclusions just because, in theory, if I knew more my conclusion might be different - mathematically speaking - while at the very same time refusing to provide me any information to correct my conclusion.

If you do not have any information which supports your point of view, then I can only qualify my statement: maybe I am wrong, but there is no evidence to support that (at least none that I know of). So please, if you have some test results or something, why don't you share them?

Regards, Vassili.
 
I don't necessarily expect the same results for the same steel from different manufacturers.
...
Same with D2 - one is a limited run for Benchmade, and the other is made by KaBar, which has a strong relationship with Dozier; it is actually made in Japan, and not from the same D2 but from Japanese D2. In this case I do not see why it should be the same either.

So I do not really consider this difference as evidence that there is something wrong with the test.

from a past thread...

http://www.bladeforums.com/forums/showpost.php?p=6009447&postcount=39

You state here that there are differences in how the edge retention of D2 will test out based on steel mfg, HT, lot #, etc., but you have argued in the past that any D2 knife with a "good" HT can be sharpened to an extremely sharp edge regardless of steel mfg, HT, or knife mfg.

If you agree that steel properties & characteristics can affect both edge retention and a steel's ability to take a very sharp polished edge (aaaggghhh, edge-stability-like factors!), then there is a conflict, and both of your assertions cannot be true. Maybe your error is due to an inadequate test for sharpness (maybe your test hair is too fat to test for a truly high level of sharpness). I still contend that some D2 does not like a polished edge, and some does.

I tried not to mention that the old thread has another example of a bad conclusion - that S30V @ 60 HRC has better edge retention cutting rope than FFD2 @ 65 HRC - but I can't resist.

No matter how good a test may be, you can draw bad conclusions from the results. It happens when one disregards all other knowledge & testing in favor of one's own personal experience. What good is testing if you over-extrapolate the results into wrong conclusions? This is why, when testing, you first figure out what you want to find out, then lay out a test method that will logically take you to a conclusion that can be defended both by known correlations (or science) & by others' test results (i.e., some sort of hypothesis is required if you ever want to defend your results).

And if someone wants to throw out that lame "do your own testing" response again, I'll state now that it is a really boneheaded argument from those who don't know what the heck they are talking about, and it just shows an inability to make a valid argument (sorry, but that is an idiotic & nonsensical argument to put forth).
 
Nobody repeats another person's tests, so there's seriously nothing to correlate to. Most of the tests are light-years apart in what is being measured, how it is measured, and what knives, cut material, and measuring devices are used. I honestly cannot find any tests with a measurement precision finer than a pound that are in any way similar to another. Some are done by machines, some are done by hand; some cut rope, some cut rubber bands, some cut cardboard; some are slices, some are push cuts, some are rocking cuts; some intermittently measure sharpness by shaving hair, some by cutting thread, and some don't, just measuring edge loss by the force of the cut on the abrasive material.

Nothing correlates, so no conclusions can be drawn about the validity of the tests or the results. We all just fit stuff to our expectations, like FFD2 having more wear resistance than X steel, or 1095 holding sharpness longer than M2. Gonna need some peer evaluation on those, and on everything else.

I really like reading about all the testing everyone does, in the industry and by end users. But the more I read, the less I see in the way of verifying test results by independent parties.
 
I have pretty good confirmation of my SGPS results from BLUNTRUTH4U. I consider this very important proof of my tests' validity.

You see, the test found something which was not known and not expected - at least by me. And then this also found independent confirmation by a very different test method.

It is important to find something unexpected and have it confirmed.

So I am confident in my tests, I value the results, and I am going to keep improving this testing.

The problem is that nobody provides public access to CATRA test results, not many manufacturers have a CATRA machine to do testing, and I am starting to think that not many do actual testing at all - at least there is no evidence of it.

We have no other information on edge retention, and as a result we get such "surprises" as the poor test performance of SGPS and SG2, simply because the commonly accepted knowledge is based not on real performance tests but on marketing handouts - and so everybody (including me) was so excited about these super steels...

Accept this test or not - it is a personal choice. But I value a single independent formal test result more than thousands of words about the definition of sureness, etc., or other word manipulations, with no real data and no real work behind them.

Thanks, Vassili.
 
I agree that testing is better than talking, and also that the steel in the U2 doesn't hold an edge for crap; I have had one for a while and found this out immediately. But one test may have problems, and without repeating the test independently, those problems can be hard to identify. Some of your results fly in the face of what we know, think we know, or what others report. Now, where are you, we, or they wrong? It's hard to know, because everyone is doing their own thing. It's hard to even find general trends based on hardness, alloy content, edge angle, etc., because nothing is held constant, including the test parameters.
 
Nobody repeats another person's tests, so there's seriously nothing to correlate to.
...
I really like reading about all the testing everyone does, in the industry and by end users. But the more I read, the less I see in the way of verifying test results by independent parties.

I agree with your sentiment and much of what you say, but I think Phil Wilson and Wayne Goddard do pretty much repeat each other's tests. And when they have tested the same blades, they got results that were close. And a CATRA test is repeated via multiple trials (if you believe the testers weren't rigging the test).

I think there is a lot of good information available: tons of anecdotal information from users, great empirical knowledge from knifemakers, some really knowledgeable and professional steel experts, occasional results from experienced hand testers, some CATRA results, and once even some professors who are real experts at testing. There is also a lot of industrial and academic info available off-site.

When conclusions are posted contrary to the preponderance of the above, you gotta wonder.
 
Now, where are you, we, or they wrong?
I can guess: mainly inconsistent sharpening. When you start with a 100% difference in initial sharpness, what good is the test for any meaningful comparative conclusion about edge stability or anything else?

Push cutting through the rope is valid for certain aspects, but hardly suitable for others.

I agree with Broos; the argument "show me your tests or don't talk" is pretty lame. By that logic, only movie directors could criticize movies, and only poets poems...
 