Edge Retention Testing and Analysis

I’ve always thought that initial sharpness had little to do with the total edge holding ability of a blade. The extreme sharpness that the FFD2 blades are capable of has me thinking that it is a factor of both the high hardness and the extremely fine grain. I’ve been thinking about how to test this initial sharpness. I love what the CATRA machine does but can’t afford one. When a freshly sharpened blade is put in it and it cuts with no registered pressure, we’ll know what ultimate sharp is.
My opinion is that some of the variations in initial sharpness in tests done by other folks were due to whether the wire edge was left intact, partially removed, or completely removed. I define sharp as two straight lines meeting at infinity. When that pesky wire is hanging on to the edge it will throw things off. The wire edge is one of the most common problems that people have in sharpening. I’ve taught my sharpening class for 30 years at knife shows and it has always been the same: most folks there don’t have a clue about the wire edge or how to eliminate it.
 
If so, how do you decide the standard deviation of the noise?

It is measured experimentally, obviously; the data points are meaningless without that. I have even done large sets (100+ trials) to confirm that the noise is in fact gaussian, which you would expect of course.
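As a side note on how such a check could be done: a normality test on the repeated trials is one option. This is only a minimal sketch with synthetic stand-in numbers, not the actual data or scripts from the thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 100 repeated trials of one sharpness measurement (synthetic stand-in data)
trials = rng.normal(loc=5.0, scale=0.3, size=100)

# Shapiro-Wilk: a large p-value means no evidence against normality
stat, p = stats.shapiro(trials)
print(f"mean={trials.mean():.3f}  sd={trials.std(ddof=1):.3f}  shapiro p={p:.3f}")
```

With 100+ trials per point, a test like this (or a simple normal-probability plot) is enough to justify treating the noise as gaussian in later simulation steps.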

What is "the correlation matrix"?

The matrix which contains the correlation coefficients for the parameters. It is essential to calculate and evaluate this for many reasons; you cannot just look at the parameter uncertainties.

What data are you doing the correlation with, and what statistical techniques do you use for the correlation?

Nonlinear least-squares, Levenberg-Marquardt with switches for robust fitting (scaling by deviations). I keep meaning to switch to median-based fitting. I have been looking lately at neural networks as well.

Of course, if you don't want to share your source code, I'll understand.

That isn't it, I'd just like to be there when you open it because your reaction would be priceless. It is actually a series of files: one is a DOS BAT file, another a GNU scripting file, and another an awk script. There are also two input files which contain the directions/switches for the scripts and the raw data. None of it is documented and the variables all look like x1yyz or similar. There is about 10x more code than necessary, as everything which was ever used is in the same file, just commented out. It also isn't coherent, because I don't always run the monte carlo part; I generally only do that last, once I have stabilized the fits. However the actual implementation of the algorithm is irrelevant, it is just an intersection and you can do that however you want.

I don't think you ever explained the curve you're doing this on, but I think I figured it out.

The general procedure is just an intersect algorithm which proceeds by brute force methods. The axes are just chosen to get the desired cut ratios. Sharpness vs amount of media cut, for example, was the one I was specifically referencing in the above. But you could use the same method on amount of media vs strokes; the meaning of the ratios would then just be different of course, so just present them accordingly.

The point isn't the particular algorithm, it is that the cut ratio is a FUNCTION which is nonlinear, it isn't a constant. Once you accept this fact then you realize that you have to be careful what you say in regards to edge retention, because the point at which you sharpen (how blunt the knife will get) will greatly influence the cut ratio. Then on top of this it will be affected by geometry, grit, type of cutting, etc.

If you really want to argue an unbiased comparison then you would define the exact conditions for the conclusions to be valid or else, by definition, the results are indeed biased as I noted previously.

Is it the average x1 and x2?

You can use a linear interpolation, so it would be

y=y1+(y2-y1)/(x2-x1)*(x-x1)

This will give an average [(y2+y1)/2] when x is directly between x1 and x2.
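As a small sketch, that interpolation is just a one-line function (purely illustrative; the names are mine):

```python
def lerp(x, x1, y1, x2, y2):
    """Linear interpolation through (x1, y1) and (x2, y2)."""
    return y1 + (y2 - y1) / (x2 - x1) * (x - x1)

# Midway between x1 and x2 the result is the average (y1 + y2) / 2
print(lerp(1.5, 1.0, 10.0, 2.0, 20.0))  # 15.0
```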

It's my observation that different steels have different ultimate sharpness limits.

Yes, but this isn't a huge difference (when proper methods are used); it is usually at the limit of the ability to resolve unless you are at very fine angles, <10° per side. If you are interested I can send you the ANOVA data I referenced, which was produced by a professional sharpener and the results verified under magnification as well as by direct measurement for a wide variety of steels. He has also noted the influence of abrasive media and method, and how using the same sharpening for all doesn't work because it can bias the results, yes, even to the extent of edge retention significantly. In fact you would expect an India abrasive to bias the results in favor of the friction forged D2 over high vanadium steels, for the obvious reason that vanadium carbide is not well cut by aluminum oxide.

The hitch is that nobody knows the best possible performance of either steel A or steel B.

This is an exaggeration which is a bit pointless. The same is true for anything in an absolute sense. Ask someone who is well respected for the heat treatment/geometry and then use that and thus have a defined comparison. For example Phil Wilson has been using S90V for years so it would be very useful to compare against his blades as reference points vs some unknown sample especially if you use the geometry and edge finish that he has found optimal for the steel.

If you are interested in sharpness and the other issue I noted which is the influence of angle on edge retention :

http://www.cutleryscience.com/articles/sharpness_review.html

http://www.cutleryscience.com/articles/edge_stability_review.html

I have been planning to write the edge retention part of that for over a year, detailing the various methods used, where the interpretation has to be very carefully performed, where it is misused and gives biased information, and where the methods are in fact useless as they produce absurd results, as I gave two examples of previously. This conclusion is not new; in Germany it was known that you can NOT use CATRA results to infer human edge retention. It is again just physics and math, not opinion. This is in fact a conclusion over 50 years old; see the work of Klemm and Kligelhoeffer.

What only astonishes me is that blades with same geometry (plus you added roughly same REST sharpness value), have so different "amount of material cut" on their first stroke, where edge retention shouldn't (probably) show yet.

They don't; like I said, the first point is actually after 20 cuts, so edge retention is obviously a strong factor. It is also very possible to have a high push cutting sharpness and a low slicing aggression. I showed this years ago by showing the result of different grits/stropping on blades.

-Cliff
 
Originally Posted by Broos
Quote:
I still think to run tests using anecdotal evidence (hints) to establish each steel's optimum geometry, edge finish, and sharpening procedure is inherently
flawed, and would yield results you could not defend.

I think I agree. And I suspect that there is a substantial amount of disagreement over these parameters among the BladeForums participants. I'm toying with
the idea of opening up a thread where I ask people to list what they consider to be optimum edge geometry for various steels, just to see how much agreement
there is. But I'm a bit afraid of that Pandora's box.
I think that could be a real good thread, and would show a lot of disagreement in what is optimal. It is my opinion that optimal edges, angles and finish/grit are more dependent on the work being done, what is being cut and how it is cut, than on steel type. Most cutting done with a knife is a combo of slicing and push/press cutting, and the ratio of each is different for everyone. So opinions on what is optimal are very different, and would be very interesting.
 
Cliff, Klemm, or Kligelhoeffer,

Is it possible for you to concisely define edge retention, and then inform us of your methodology used to measure it?

Phil W. or Wayne G. or others with much experience, do you vary the blade geometry, edge angle and blade finish on the same knife depending on the steel you are using?

If yes, how have you gained this understanding on optimal geometries/finishes?
 
In more detail on the above references, Knapp has papers from 1928 on this issue (yes, that far back); the subject gets into great detail by the 50's and includes the influence of geometry, carbides, grain, and test conditions on edge retention. All of the issues I noted here are again not theories or opinion, they are facts which I have seen experimentally and which were verified over 50 years ago in Germany.

It would have been useful for me to have known about this ten years ago, would have saved me some time rather than reinventing the exact same process/conclusions. The basic point remains though, you can't use CATRA or similar machines to infer human performance, you can't ignore test conditions and you can't use linear inferences on nonlinear models. All of these lead to biased results and misinformation. These again are math definitions of the words.

As I noted I do intend to write this up in the third article, but I also intended to do that a year ago. I have other things first, I still don't even have a home page at cutleryscience.

-Cliff
 
Cliff,

Thanks for your response. I appreciate it, and realize it's probably taking time you could be using doing other fun things.

I do have two favors to ask of you, if it wouldn't be too much trouble.

First, could you separate responses about your data analysis from your comments about other sharpness issues? I'm trying to figure out your analysis, and it would help me if those posts were separate.

Second, if I do figure something out, could you tell me that I have it right? That way I'm sure I understand, rather than wondering if you either missed what I said or didn't understand what I said.

Thanks in advance for helping me this way.

It is measured experimentally, obviously; the data points are meaningless without that. I have even done large sets (100+ trials) to confirm that the noise is in fact gaussian, which you would expect of course.

Since this comment is about noise, am I right to assume that your monte carlo method is to do the following:

1. Fit the sharpness (cut per stroke) vs total media cut data to your cutting model: C(x)=Ci/(1+ax^b). This gives you three parameters Ci, a, b.

2. Calculate the residuals, or the difference between the measured data and the modeled data in the y (C) direction. The standard deviation of the residuals is the standard deviation used later in the monte carlo simulation.

3. Repeat steps 1 and 2 for both blades being compared.

(I have no evidence that you've actually done the next two steps, but I suspect it, given some of your comments. If I'm wrong, please enlighten me).

4. Divide the two curves at a variety of sharpnesses to get the "cut ratio" as a function of sharpness.

5. Calculate the model parameters for the cut ratio function by fitting data obtained in step 4 to some model functional form. I haven't got a clue about the functional form you use for cut ratio. Could you enlighten me?

6. Using the model parameters and the standard deviations obtained in steps 1 and 2, generate some random simulated C and x data (step 1) by calculating C from the model for a variety of x points, then adding normally distributed noise with the standard deviation calculated in step 2. Then use this random data to calculate individual points for cut ratio, following your "raw data" procedure.

7. Compare the monte carlo results for cut ratio (from step 6) to the empirically fit cut ratio function (from step 5) to see how well the monte carlo data fits the model. Using residual analysis, you can then estimate the quality of the fit.
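Steps 1 and 2, as proposed above, could be sketched like this with scipy's `curve_fit` (the data here is synthetic and the model form C(x)=Ci/(1+ax^b) is taken from step 1; this is only an illustration of the proposal, not the actual analysis code, which Cliff later corrects):

```python
import numpy as np
from scipy.optimize import curve_fit

def cut_model(x, Ci, a, b):
    """Proposed model: sharpness (cut per stroke) vs total media cut."""
    return Ci / (1.0 + a * x**b)

rng = np.random.default_rng(1)
x = np.linspace(1.0, 50.0, 30)
y = cut_model(x, 8.0, 0.5, 0.7) + rng.normal(0.0, 0.1, x.size)  # synthetic data

# Step 1: fit for the three parameters Ci, a, b
popt, pcov = curve_fit(cut_model, x, y, p0=(8.0, 0.5, 0.7))

# Step 2: residuals and their standard deviation (ddof=3 for 3 fitted parameters)
residuals = y - cut_model(x, *popt)
noise_sd = residuals.std(ddof=3)
print(popt, noise_sd)
```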

Please confirm that this is your analysis technique, or clarify where I've made mistakes, or throw my whole process out and give me a step by step description of your process in a form similar to my list. This would give me enough information to write my own code to do the work.

Thanks,

Carl
 
[Carl asked: What is the correlation matrix?]

The matrix which contains the correlation coefficients for the parameters. It is essential to calculate and evaluate this for many reasons; you cannot just look at the parameter uncertainties.

This is a generic answer to a specific question. What data values are being correlated? You should be able to give me an answer of the form "a is correlated with b". I honestly don't know where you're using correlation techniques in your process.

Nonlinear least-squares, Levenberg-Marquardt with switches for robust fitting (scaling by deviations). I keep meaning to switch to median-based fitting. I have been looking lately at neural networks as well.
Thanks for sharing with me your correlation techniques. If I knew what data to apply them to, I'd be home free.

That isn't it, I'd just like to be there when you open it because your reaction would be priceless. It is actually a series of files: one is a DOS BAT file, another a GNU scripting file, and another an awk script. There are also two input files which contain the directions/switches for the scripts and the raw data. None of it is documented and the variables all look like x1yyz or similar. There is about 10x more code than necessary, as everything which was ever used is in the same file, just commented out. It also isn't coherent, because I don't always run the monte carlo part; I generally only do that last, once I have stabilized the fits. However the actual implementation of the algorithm is irrelevant, it is just an intersection and you can do that however you want.
As soon as I get enough information, I'll implement the algorithm.

You can use a linear interpolation, so it would be

y=y1+(y2-y1)/(x2-x1)*(x-x1)

This will give an average [(y2+y1)/2] when x is directly between x1 and x2.

According to my definitions, y1 and y2 were used for two different blades. So I think the equation should be

y1(x') = y1(x1)+(y1(x2)-y1(x1))/(x2-x1)*(x'-x1)

Did I get the right definition of the process for determining x1 and x2?


Yes, but this isn't a huge difference (when proper methods are used); it is usually at the limit of the ability to resolve unless you are at very fine angles, <10° per side. If you are interested I can send you the ANOVA data I referenced, which was produced by a professional sharpener and the results verified under magnification as well as by direct measurement for a wide variety of steels. He has also noted the influence of abrasive media and method, and how using the same sharpening for all doesn't work because it can bias the results, yes, even to the extent of edge retention significantly. In fact you would expect an India abrasive to bias the results in favor of the friction forged D2 over high vanadium steels, for the obvious reason that vanadium carbide is not well cut by aluminum oxide.

I'd love to see the data. I'm always interested in evaluating data.

Thanks,

Carl
-------------------------------
It is not necessary to believe things in order to reason about them
It is not necessary to understand things in order to argue about them.
- P.A. Caron de Beaumarchais, French Author, 1732-1799
 
[In Response to Ravaillac: What only astonishes me is that blades with same geometry (plus you added roughly same REST sharpness value), have so different "amount of material cut" on their first stroke, where edge retention shouldn't (probably) show yet.]

They don't; like I said, the first point is actually after 20 cuts, so edge retention is obviously a strong factor. It is also very possible to have a high push cutting sharpness and a low slicing aggression. I showed this years ago by showing the result of different grits/stropping on blades.

-Cliff

Cliff,

I assume you missed my plot of the media cut during the first 20 strokes of the test. It's not true that our first data is after 20 strokes.

To demonstrate that the 20 stroke measurement is consistent, I've included a plot of the total rope cut as a function of stroke number for the first 20 strokes. As you can see, the cutting performance is very linear for the first 20 strokes, and you can see that the proposed "much higher initial performance for s90V" doesn't exist.
s90V-Initial-Trend.jpg

This shows that the cut per stroke is consistent over the first 20 strokes, and there is no initial non-linear blunting.

Also, you need to understand that our media is not CATRA media, and 20 strokes on hemp rope is equivalent to far fewer strokes on CATRA media.

Carl
 
In more detail on the above references, Knapp has papers from 1928 on this issue (yes, that far back); the subject gets into great detail by the 50's and includes the influence of geometry, carbides, grain, and test conditions on edge retention. All of the issues I noted here are again not theories or opinion, they are facts which I have seen experimentally and which were verified over 50 years ago in Germany.
-Cliff

Could you give us the two or three best specific references?

Thanks,

Carl
 
the subject gets into great detail by the 50's and includes the influence of geometry, carbides, grain, and test conditions on edge retention. All of the issues I noted here are again not theories or opinion, they are facts which I have seen experimentally and which were verified over 50 years ago in Germany. -Cliff

As we review the facts that will be forwarded to us by Cliff, lets keep in mind the process by which these types of relationships are validated:

0) Postulate the theoretical relationship
1) Develop a test criteria
2) Perform the test w/results that confirm theory
3) Confirmation of test criteria by peers
4) Perform additional tests repeating the results

Didn't cold fusion look pretty good until the last step?

I too am interested in seeing this information. Feel free to correct my method outline above.
 
1. Fit the sharpness (cut per stroke) vs total media cut

Yes.

Calculate the residuals, or the difference between the measured data and the modeled data in the y (C) direction. The standard deviation of the residuals is the standard deviation used later in the monte carlo simulation.

No, that would be a horrible idea because you are now constraining the simulation to the model. The simulation has to be done on the data alone; the residuals could be meaningless if the model doesn't represent the physical behavior. The uncertainty in the data is, as I said, measured directly.


4. Divide the two curves at a variety of sharpnesses to get the "cut ratio" as a function of sharpness.

I used to look at the model intersects (not the ratio of the functions) but now I just use the data directly with no modeling for the cut ratios. I still do the fitting for the other reasons previously noted.

5. Calculate the model parameters for the cut ratio function by fitting data obtained in step 4 to some model functional form.

I don't model this generally because it is too noisy. I just quote a range generally and show the full plot.

Using the model parameters and the standard deviations obtained in steps 1 and 2, generate some random simulated C and x data (step 1) by calculating C from the model for a variety of x points, then add normally distributed noise with a standard deviation calculated from step 2). Then use this random data to calculate individual points for cut ratio, following your "raw data" procedure.

No model parameters; the raw data is used to generate a large number of pseudo-data sets, to which the above intersect algorithm is applied to find the cut ratio.

7. Compare the monte carlo results for cut ratio (from step 6) to the empirically fit cut ratio function (from step 5) to see how well the monte carlo data fits the model. Using residual analysis, you can then estimate the quality of the fit.

No, again no model.

What data values are being correlated?

You are misunderstanding the term correlation matrix. It has the correlation of the model parameters (initial sharpness to exponent, exponent to multiplicative factor, etc.).

As soon as I get enough information, I'll implement the algorithm.

You have two curves y, y'.

1) Find the x, x' values for which y(x)=y'(x'). The ratio x'/x is the cut ratio, which shows how much more material one blade can cut than another to reach a given level of degradation.

2) Find the y/y' values at a given x value. This is the sharpness ratio, which shows the relative level of sharpness at a given amount of material cut.

Note that both of these measure edge retention but they give very different numbers. Which one is more important will depend on the user; personally the cut ratio makes sense to me because I cut until knives reach a certain level of dulling, whereas the second fits if you cut a certain amount of material (regardless of the edge retention) and then resharpen.
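As a sketch of measure (1), here is one way to intersect two blunting curves at a common sharpness level. The curves, names, and numbers are synthetic illustrations (any interpolation-based intersect would do; this is not the original scripts):

```python
import numpy as np

def cut_ratio(x_a, y_a, x_b, y_b, level):
    """x'/x where both blades reach the same sharpness `level`.
    Curves are assumed monotonically decreasing, so reverse them for np.interp
    (which requires increasing x-coordinates)."""
    xa = np.interp(level, y_a[::-1], x_a[::-1])
    xb = np.interp(level, y_b[::-1], x_b[::-1])
    return xb / xa

x = np.linspace(0.0, 100.0, 201)
blade_a = 10.0 / (1.0 + 0.05 * x)   # blunts quickly (synthetic)
blade_b = 10.0 / (1.0 + 0.02 * x)   # blunts slowly (synthetic)

r = cut_ratio(x, blade_a, x, blade_b, level=5.0)
print(r)   # roughly 2.5: blade B cuts ~2.5x the material to reach the same sharpness
```

Note that the ratio depends on the chosen `level`: intersecting at a different sharpness gives a different number, which is exactly the nonlinearity point made above.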

y1 and y2 were used for two different blades.

No, I meant it to be the two bounding points on the same curve. In retrospect that is pretty poor notation and your interpretation would be the obvious one.

Also, you need to understand that our media is not CATRA media, and 20 strokes on hemp rope is equivalent to far fewer strokes on CATRA media.

That doesn't matter to the model. Like I said, it will even model the effects of dental scrapers. I took published independent data and showed that awhile back.

You have to be careful when you talk about blunting because what you are measuring is cutting ability, which is the total force (or material cut) and thus influenced not only by the force due to sharpness but also by the force due to wedging.

Since the former is very small compared to the latter, it takes a large difference to be seen. It could be highly nonlinear and still not be seen. Why is this data not in the excel sheet you posted earlier? I would like to add it to the curve I had in the above and verify the initial behavior, as either way there is a serious issue.

Could you give us the two or three best specific references?

Yes, as soon as I complete the literature survey.

-Cliff
 
Cliff, Klemm, or Kligelhoeffer,

Is it possible for you to concisely define edge retention, and then inform us of your methodology used to measure it?

Phil W. or Wayne G. or others with much experience, do you vary the blade geometry, edge angle and blade finish on the same knife depending on the steel you are using?

If yes, how have you gained this understanding on optimal geometries/finishes?

Early on our rope cutting proved that blade thickness at the edge had an effect on the total amount of rope we could cut. By 1977 my 4-inch hunting knife pattern had reached the basic shape it is today and became the official shape of our test knives. Blades were 1/8-inch thick, flat ground to an edge that had a flat of .020 inch. The primary sharpening angle of fifteen degrees, measured from the side of the blade, is established with a fine abrasive belt. The edge is then finished with the Norton Fine India and the wire edge removed with the same stone.

My tests were aimed at finding the best type of grind and edge for a hunting type knife.
 
Thanks, Cliff. I appreciate your responding directly to my questions.

Now I'm back to my original question.

If you're calculating the sharpness ratios directly from the data, and not fitting a curve to either the (cut per stroke) vs (total media cut) data or the (calculated cut ratio) vs (total media cut) functions, then ...

How do you calculate the standard deviation to be used in the Monte Carlo simulations?

If you take the standard deviation of the (cut per stroke) data, you haven't accounted for the functional change, and the data wouldn't be valid.

The same is true for (total media cut) and (calculated cut ratio).

In order to measure noise in a signal, you have to have some way to measure the noise-free signal. Then the noise is the deviation from the noise-free signal.

So ---

What is the noise-free signal you are eliminating from the measured data in order to determine the noise?


Thanks,

Carl
 
You are misunderstanding the term correlation matrix. It has the correlation of the model parameters (initial sharpness to exponent, exponent to multiplicative factor, etc.).

In order to do a correlation between data, you must have the same data points measured under different conditions.

So if I wanted to do correlation between (initial sharpness) and (b), I'd need to have lots of values of (initial sharpness) and lots of values of (b). Then I could see how these varied with respect to one another.

Is your correlation matrix defined by doing multiple edge retention tests on a single blade, and fitting the model to each of the tests, and putting an entry in a table for (initial sharpness), (a), and (b) for each test?

Or is your correlation matrix defined by all of the edge retention tests you've ever done?

Or can you explain some other way that you get the multiple (initial sharpness), (a), and (b) values you use in your correlation studies?

Thanks,

Carl
 
Yes.
No model parameters; the raw data is used to generate a large number of pseudo-data sets, to which the above intersect algorithm is applied to find the cut ratio.

What is the technique by which the raw data is used to generate pseudo-data sets?

Carl
 
Wayne,

Have you found any reasons to vary your geometry depending on the steel you are using?

Thank you
 
If you're calculating the sharpness ratios directly from the data, and not fitting a curve to either the (cut per stroke) vs (total media cut) data or the (calculated cut ratio) vs (total media cut) functions, then ...

Yes, I use the raw data now to calculate the ratios; I use the model as well, but for different reasons. In time I want to be able to model the data based on the physical properties of the knife/steel, i.e., draw the curve for a given geometry, with a given finish, a given edge stability and wear resistance. You need to model the data to do that, obviously.

How do you calculate the standard deviation to be used in the Monte Carlo simulations?

Measure the data points many times and calculate it directly, as you would do to determine the noise in any measurement, to produce a mean and standard deviation of the mean. I actually prefer to use the median and its standard error (from the scaled IQR) most times. I usually run 3-5 sets, but as noted have done up to 100 to confirm the noise is in fact gaussian.
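Computed directly from repeated trials, that might look like the following. The numbers are made up, and the 0.7413 (IQR-to-sigma) and 1.2533 (median efficiency) factors are the standard gaussian scalings, assumed rather than taken from the thread:

```python
import numpy as np

trials = np.array([4.8, 5.1, 4.9, 5.3, 5.0])   # repeated measurements of one point

mean = trials.mean()
sd = trials.std(ddof=1)
se_mean = sd / np.sqrt(trials.size)             # standard deviation of the mean

med = np.median(trials)
q75, q25 = np.percentile(trials, [75, 25])
robust_sd = 0.7413 * (q75 - q25)                # scaled IQR ~ sigma for gaussian noise
se_med = 1.2533 * robust_sd / np.sqrt(trials.size)  # standard error of the median
print(mean, se_mean, med, se_med)
```

The robust (median/IQR) versions are less sensitive to the occasional wild cut, which matters with only 3-5 sets per point.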

In order to measure noise in a signal, you have to have some way to measure the noise-free signal.

There is no such thing as a noise-free signal. All measurement is inherently subject to deviation; that is even a fundamental physical law. It is impossible to make an observation without disturbing the system, as the act of measurement itself is an influence. Of course with any real system there are a host of random and systematic deviations.

So if I wanted to do correlation between (initial sharpness) and (b), I'd need to have lots of values of (initial sharpness) and lots of values of (b). Then I could see how these varied with respect to one another.

This is indeed true.

Is your correlation matrix defined by doing multiple edge retention tests on a single blade, and fitting the model to each of the tests, and putting an entry in a table for (initial sharpness), (a), and (b) for each test?

Again, the correlation matrix is not mine; this is a known math term for fitting. Just do a search on the specific algorithm I noted I use for the nonlinear fitting and it will define the matrix exactly. It correlates the parameters to EACH OTHER. It isn't a correlation coefficient between the independent/dependent variables.
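To make that concrete: for a Levenberg-Marquardt style fit (what scipy's `curve_fit` uses by default for unbounded problems), the parameter correlation matrix is formed from the covariance matrix the fitter returns. A sketch on synthetic data (my illustration, not the original analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def cut_model(x, Ci, a, b):
    return Ci / (1.0 + a * x**b)

rng = np.random.default_rng(2)
x = np.linspace(1.0, 50.0, 40)
y = cut_model(x, 8.0, 0.5, 0.7) + rng.normal(0.0, 0.1, x.size)

popt, pcov = curve_fit(cut_model, x, y, p0=(8.0, 0.5, 0.7))

# Correlation matrix: covariance normalized by the parameter uncertainties.
sigmas = np.sqrt(np.diag(pcov))
corr = pcov / np.outer(sigmas, sigmas)
print(np.round(corr, 2))   # diagonal is 1; off-diagonals couple the parameters
```

Large off-diagonal entries (close to ±1) mean the parameters trade off against each other, which is why the individual uncertainties alone can be misleading.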

What is the technique by which the raw data is used to generate pseudo-data sets?

Random gaussian generator with an appropriate mean and standard deviation. I think specifically I used an algorithm adapted from Numerical Recipes for the generator, but that of course doesn't matter.
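Putting those pieces together, one monte carlo pass could look like this. Everything here is a sketch under my own assumptions (synthetic mean curves, constant noise, a brute-force first-crossing intersect), not the original BAT/awk implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

x = np.linspace(0.0, 100.0, 51)
mean_a = 10.0 / (1.0 + 0.05 * x)   # measured mean curve, blade A (synthetic)
mean_b = 10.0 / (1.0 + 0.02 * x)   # measured mean curve, blade B (synthetic)
sd = 0.2                           # measured noise, assumed constant here

def first_crossing(xv, yv, level):
    """x at which decreasing-trend data first drops below `level`, by brute-force
    scan plus linear interpolation between the bracketing points."""
    i = int(np.argmax(yv < level))          # first index below the level
    x1, x2, y1, y2 = xv[i - 1], xv[i], yv[i - 1], yv[i]
    return x1 + (x2 - x1) * (y1 - level) / (y1 - y2)

ratios = []
for _ in range(500):
    ya = mean_a + rng.normal(0.0, sd, x.size)   # pseudo-data set, blade A
    yb = mean_b + rng.normal(0.0, sd, x.size)   # pseudo-data set, blade B
    ratios.append(first_crossing(x, yb, 5.0) / first_crossing(x, ya, 5.0))
ratios = np.array(ratios)

# Median cut ratio and a 68% interval from the simulated distribution
print(np.median(ratios), np.percentile(ratios, [16, 84]))
```

The spread of the simulated ratios is what puts an uncertainty on the quoted cut ratio, without ever fitting a model to the ratio itself.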

-Cliff
 
Thanks for the information.

OK, so let me summarize my new understanding.

You run multiple tests on a blade (ranging from 3-5, up to 100). You collect (cut per stroke) vs. (stroke number) data.

For a given blade, you can calculate the average (cut per stroke) for a given (stroke number), because you have 3-5 data points at each stroke. You can also calculate the standard deviation. For ease of discussion, I'll call the number of strokes N, the mean (cut per stroke) after N strokes C(N), and the standard deviation of (cut per stroke) after N strokes S(N).

The simulated data set would then consist of data sampled from a normal (gaussian) distribution with a mean of C(N) and a standard deviation S(N).

Is this statement about the monte carlo simulated data set correct? If not, could you tell me which statements are wrong, and preferably correct them for me?

Thanks,

Carl
 
Yes, the only change I would make is that the dependent variable is either a measure of sharpness or cutting ability and the independent variable almost always the amount of media cut. Most times for a given run I will actually record multiple dependent variables. For example on a hemp cut run I will record the force used to make a cut, then directly measure the push cutting and slicing sharpness, often by several different measures each.

-Cliff
 
Yes, the only change I would make is that the dependent variable is either a measure of sharpness or cutting ability and the independent variable almost always the amount of media cut. Most times for a given run I will actually record multiple dependent variables. For example on a hemp cut run I will record the force used to make a cut, then directly measure the push cutting and slicing sharpness, often by several different measures each.

-Cliff

Thanks, Cliff.

If you use the amount of media cut as the independent variable, how can you simulate data?

If you're using rope cutting tests like Wayne Goddard's, where the amount cut per stroke is a constant and the force used to cut the rope is variable, then I can see that you could use the amount of media cut as an independent variable.

But if you're using a machine similar to a CATRA ERT machine, there is a nonlinear relationship between the media cut and the number of strokes, so you can't ensure that multiple runs will have the same values of total media cut. Therefore, for CATRA ERT-type tests, it seems to me this technique will only work with number of strokes as the independent variable. Am I missing something?

Thanks,

Carl
 