Edge Retention Testing and Analysis

Again, the correlation matrix is not mine; it is a known math term for fitting. Just do a search on the specific algorithm I noted I use for the nonlinear fitting and it will define the matrix exactly. It correlates the parameters to EACH OTHER. It isn't a correlation coefficient between the independent and dependent variables.



I realize that "correlation matrix" is a known math term (actually, it's generally considered more of a statistics term, in my experience), but it only has meaning in the sense of a particular data fitting.

Could you help me understand the data fitting you are doing?

Are you fitting multiple runs for a single blade in order to obtain Ci, a, and b values?

Are you fitting multiple runs for multiple blades in order to obtain information about how Ci, a, and b vary with one another, depending on blade material and blade geometry?

Are you doing both of the above, and/or something else entirely?

Can you help me understand what data you are applying the known fitting routines to, and what model you're hoping to obtain by doing the fitting?

I really appreciate your patience with me, if I'm not picking things up as fast as you'd like.

Thanks,

Carl
 
I gotta go play at BLADE Show. Be back in four days or so with more of my two-bits worth.

Maybe Carl will have Cliff figured out by the time I get back. (smile)

Wayne
 
If you use the amount of media cut as the independent variable, how can you simulate data?

The independent variable is usually assumed to be fixed, i.e., to have no uncertainty, because otherwise, as noted, the problem becomes nonlinear (regardless of the model). In reality this is not absolutely true; there are always deviations in this variable as well, but assuming they are small enough (an order of magnitude less than the dependent uncertainties) you can ignore them, as they will not significantly affect the outcome.

Here, though, I would not state uniformly that this is the case in studies of blunting. For example, let's assume that I am doing 30 cm cuts on cardboard. No matter what I do, the first cut is never going to be exactly 30 cm, because there will be some drift off a straight line, and the exact nature of the cardboard will never be identical, so it is never "30 cm of an exact cardboard composition".

This means there is also a deviation in the independent variable, and to be perfectly robust this should be simulated as well. The ideal way to do this would be to measure it directly (with the variance in the dependent variable greatly reduced) and thus generate the data set with that variation as well. Ideally you could measure it by doing a variance analysis on the composition, hardness, level of abrasion, etc.

I was actually planning to do this at a later date because, quite frankly, based on what I have been seeing as of late, I think there are fairly major deviations I am not taking into account; some of the spreads I have been seeing are, I think, way beyond the measurement uncertainty in the sharpness/cutting ability. You can just model this (x-axis uncertainty) by increasing the y-uncertainty, but this is problematic for a few reasons, and ideally you model the x-uncertainties directly if at all possible.
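For anyone who wants to see what that kind of simulation looks like in practice, here is a minimal sketch in Python; the nominal 30 cm cut length, the noise levels, and the model parameters are all made-up numbers for illustration, not values taken from any test:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed blunting model C(x) = Ci / (1 + a*x**b), with made-up parameters.
Ci, a, b = 1.0, 0.05, 0.8

n_cuts = 40
nominal_cut = 30.0   # cm per cut: the value recorded as the independent variable
x_sigma = 1.5        # assumed spread in actual cut length / media variation
y_sigma = 0.02       # assumed spread in the sharpness measurement itself

# The media actually cut per stroke drifts around the nominal value.
actual_cuts = nominal_cut + rng.normal(0.0, x_sigma, n_cuts)
x_true = np.cumsum(actual_cuts)                       # true cumulative media cut
x_recorded = nominal_cut * np.arange(1, n_cuts + 1)   # what gets written down

# Measured sharpness: the model evaluated at the TRUE x, plus measurement noise.
y = Ci / (1.0 + a * x_true**b) + rng.normal(0.0, y_sigma, n_cuts)
```

Fitting y against x_recorded instead of x_true then shows how much of the extra apparent y-scatter comes from the ignored x-uncertainty.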

-Cliff
 
Ok, I understand about the simulation of the data.

Now, can you help me understand what data you are fitting to what model when you produce the correlation matrix, using Levenberg-Marquardt nonlinear least squares fitting with switches to allow scaling by deviation?

And did you mean the Levenberg-Marquardt method for choosing a path of descent on the objective function surface, as described in Chapter 14 of Numerical Recipes, or Section 15.6 of the second edition of Numerical Recipes? A Wikipedia reference to Levenberg-Marquardt is also found here.

Thanks in advance,

Carl
 
All this is very interesting to us non-engineer, non-physicist types. Yeah, right. :) Based on what I am reading here, no test is sufficiently robust to produce a significant degree of certainty when it comes to edge retention superiority. The variables produce too much data noise. Sure, you can probably pick out the better steel when comparing two widely different steel alloys, but once you get into the realm of high-tech modern alloys, the differences are so slight that it really becomes a matter of personal preference and intended task, not a clear materials superiority across a broad spectrum of uses.

It's like high fidelity stereo. Once you get beyond a few thousand dollars in equipment costs, the gain increments get smaller and smaller while the dollar outlay rises exponentially.

This is why I like competitions. Instead of doing tests on knives, hold competitions. Set the rules and let the best knife win. Eventually some steel, heat treat, edge geometry, and blade design will lead the pack and become the benchmark others try to beat. When you have a winner, analyze it and learn from it. Innovations and improvements come in leaps and small increments, and competition drives this better than anything else.

For example, like the ABS rope cutting competition, set up a small blade cutting contest that includes cardboard cutting, rope cutting, and so on. It would only be a few years before we would start to see what really stands out.
 
All this is very interesting to us non-engineer, non-physicist types. Yeah, right. :)

I hope that when us PhD geeks have finished arm wrestling through this stuff, there'll be a simple explanation for all the normal people on bladeforums. But getting there will be interesting for only a small handful (maybe 2) of us.

Based on what I am reading here no test is sufficiently robust to produce a significant degree of certainty when it comes to edge retention superiority. The variables produce too much data noise.

I don't believe the noise is too high. Cliff does. I'm just trying to understand how Cliff gets to his conclusions, so I can see if I agree with them or not.

This is why I like competitions. Instead of doing tests on knives, hold competitions. Set the rules and let the best knife win. Eventually some steel, heat treat, edge geometry, and blade design will lead the pack and become the benchmark others try to beat. When you have a winner, analyze it and learn from it. Innovations and improvements come in leaps and small increments, and competition drives this better than anything else.

I'm all for competitions. But sometimes it's hard to separate the effect of the person holding the blade from the blade itself. That's why I'd like to see tests as well as competitions.

For example, like the ABS rope cutting competition, set up a small blade cutting contest that includes cardboard cutting, rope cutting, and so on. It would only be a few years before we would start to see what really stands out.

It would be fun, wouldn't it?


Carl
 
I'm all for competitions. But sometimes it's hard to separate the effect of the person holding the blade from the blade itself. That's why I'd like to see tests as well as competitions.
Carl

I have competed in two rope cutting contests, as well as myriad tameshigiri (Japanese mat-cutting contests).

I can tell you categorically that in that medium, as it currently exists, it is about 25% hardware (knife/sword) and 75% software (knife/sword wielder).

Great cutters may not always be on, but when they are, they could be cutting with many different blades and still get superlative results (6+ 1" rope bundles severed with one stroke, with a 10" knife). :eek:

Best Regards,

STeven Garsson
 
Now, can you help me understand what data you are fitting to what model when you produce the correlation matrix, using Levenberg-Marquardt nonlinear least squares fitting with switches to allow scaling by deviation?

The data is usually sharpness/cutting ability vs. amount of media cut. I showed an example fitted curve as well as the cut ratio calculations in the other thread.

I have several measures of sharpness and cutting ability. The curves I have described on the webpage are essentially a power law based on the physics of the mechanics of blunting.

As noted, these are not empirically based models like a general power law comparison. OK, they are semi-empirical, as I am fitting parameters.

A Wikipedia reference to Levenberg-Marquardt is also found here.

That is it.

Based on what I am reading here no test is sufficiently robust to produce a significant degree of certainty when it comes to edge retention superiority.

No, that was never said; in fact, I showed specifically how you would determine the actual advantage of the friction-forged D2 blades over S90V.

I don't believe the noise is too high. Cliff does.

It is not about belief, it is about fact. When you manipulated the data, the noise exploded simply due to the rules of error propagation, and an additional nonlinearity was introduced. I showed how to model the data properly without doing any of that, and the resulting noise was relatively very low.

-Cliff
 
It's all in how you set up the contest. Yes, you can create a competition where the human factor is significant if not actually dominant, but you can also make rules that limit it. It might be as simple as handing the knife to someone who fits it into a machine that is designed to run material past the edge. The knife that cuts the most material wins.

On the other hand, introducing the human factor is exactly what makes competitions fun. Regardless of the rule scenario, gamesmanship will cause continuous innovation, and the knives that win early on will not be the knives that win later rounds. I believe this would drive not only material improvements but design improvements as well.

As for noise: even Cliff's testing methodology leaves me with considerable doubt, not so much about his results on specific knives and steels, but about his data's predictive capacity. The only thing that works for me is real-life experience with a specific knife. Either it performs for me or it does not. I look forward to trying an FF D2 knife. I hope it is a step forward.
 
It might be as simple as handing the knife to someone who fits it into a machine that is designed to run material past the edge. The knife that cuts the most material wins.

Congratulations, you just found the knife that is best for a machine to use. Why do you care about that? Are you in the manufacturing industry and need a machine to cut a lot of leather? This is of NO interest to a leatherworker who needs a personal knife to cut a lot of leather. You can easily design a competition WITH PEOPLE where the best knives will win, not the people. It is just math.

-Cliff
 
Steelhed,
That is a good idea - some kind of cutting contest - limit knife size, weight, and/or dimensions, and design the test so it is cutting only. Superior edge retention of the knife will be critical to success. Maybe the first phase is pure sharpness tests, the second phase intense cutting, and the third an endurance cutting test. Let the competitions go for a few years, and you will see the different types of steels (certain makers?) used by the competitors get narrower and narrower. Maybe.

The winner of this competition gets the right to define "edge retention" for a year!

Then we would use numerical methods to best fit the data found to an equation. ;)
 
It is not about belief, it is about fact. When you manipulated the data, the noise exploded simply due to the rules of error propagation, and an additional nonlinearity was introduced. I showed how to model the data properly without doing any of that, and the resulting noise was relatively very low.

-Cliff

You have never shown the fact, only the belief. You said the noise was too high. I've asked for your measurements of the noise. You've never given them. Until I see at least one number showing what the noise is, I can't attribute your statements to anything but opinion.

Show me the noise data, and show me how the noise data is too high (or not too high, depending on the data set), and then I'll be able to credit your assertions as fact-based.

Carl
 
The data is usually sharpness/cutting ability vs. amount of media cut. I showed an example fitted curve as well as the cut ratio calculations in the other thread.
-Cliff

OK, so please let me know if I understand this correctly.

1. You perform edge retention tests to determine the sharpness as a function of the amount of media cut. You do this multiple times for a given blade.

2. Using the data from step 1, you calculate the mean and standard deviation for each x-axis value.

3. With the Levenberg-Marquardt method, you use a nonlinear curve-fitting procedure to fit your model, C(x) = Ci/(1 + a*x^b), to the data (a rough sketch of this appears after the list).

4. The fitting procedure gives you values for Ci, a, and b.

5. The covariance matrix (which I believe is the same thing you called the correlation matrix) contains data on the goodness of the fit.

6. If the goodness of the fit is adequate, you have a reasonable measurement of the cutting performance. If the goodness of the fit is inadequate, you don't have a reasonable measurement, and you conclude that the data isn't yet good enough to report.
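If I have steps 1-6 right, the whole procedure would look roughly like this sketch in Python/SciPy; the data arrays and starting guesses are placeholders, not real measurements, and SciPy's curve_fit is only a stand-in for whatever Levenberg-Marquardt implementation you actually use:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, Ci, a, b):
    # The blunting model from step 3: C(x) = Ci / (1 + a * x**b)
    return Ci / (1.0 + a * np.power(x, b))

# Placeholder data: mean sharpness and its standard deviation at each
# amount of media cut (illustrative numbers only, not real measurements).
x = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 80.0])
y = np.array([0.95, 0.72, 0.58, 0.44, 0.33, 0.25])
y_err = np.array([0.05, 0.05, 0.04, 0.04, 0.03, 0.03])

# "Scaling by deviation": weight each point by its standard deviation.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 0.1, 1.0],
                       sigma=y_err, absolute_sigma=True, method='lm')

# Step 5: turn the covariance matrix into the parameter correlation matrix.
perr = np.sqrt(np.diag(pcov))
corr = pcov / np.outer(perr, perr)

print("Ci, a, b =", popt)
print("parameter standard errors =", perr)
print("parameter correlation matrix:\n", corr)
```

The off-diagonal entries of corr would then be the parameter-to-parameter correlations you described earlier, as opposed to a correlation coefficient between the independent and dependent variables.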

Am I close?

Thanks,

Carl
 
Show me the noise data, and show me how the noise data is too high (or not too high, depending on the data set), and then I'll be able to credit your assertions as fact-based.

I showed this clearly by pointing out that the actual raw data fit the model precisely, but because of the manipulations on it and the additional nonlinearity, it failed to fit the same mechanical model and was undefined. This should indicate a pretty severe problem with the calculations performed. This is just math, not opinion. There is, however, no statistical proof of any of the statements you made, and I showed clearly how some of the things you said, like using an average on data which are not modeled by f(x) = c, are inappropriate.

Again, to be clear, even though I have restated this several times and it surprises me that it is still not understood: the ACTUAL MEASURED DATA does show a perfectly significant improvement (math, not opinion). However, through the differences (horrible, never done) and the ratio (never done), the noise gets so large that the data just scatters, and there is no statistical proof of the claims made, just vague statements.
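The ratio part is just standard error propagation: for r = A/B with independent noise on A and B, the relative uncertainties add in quadrature. A minimal sketch with arbitrary numbers, purely to show how the noise compounds:

```python
import math

# Two measured totals with their uncertainties (arbitrary illustrative numbers).
A, sigma_A = 120.0, 15.0   # e.g. media cut by blade A before dulling
B, sigma_B = 100.0, 15.0   # e.g. media cut by blade B before dulling

r = A / B
# First-order error propagation for a ratio of independent quantities.
sigma_r = r * math.sqrt((sigma_A / A) ** 2 + (sigma_B / B) ** 2)

print(f"cut ratio = {r:.2f} +/- {sigma_r:.2f}")
# With these numbers the apparent 20% advantage (r = 1.20) carries an
# uncertainty of about 0.23, i.e. it is smaller than its own error bar.
```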

The actual burden of proof is on you to show that the conclusions are supported by the data (math). It is also useless to use a push cutting test when the method of blunting is slicing, and in any case the whole thing is useless outside of the manufacturing industry for the reasons noted above. It boggles me that this is ignored when it has been pointed out here for years; manufacturers have even publicly stated it leads to defective results, and it has been known in Germany for over 50 years.

-Cliff
 
The actual burden of proof is on you to show that the conclusions are supported by the data (math).
-Cliff

OK -- I can take the burden of proof. I'll be happy to do so. But I'll be doing it with different methods than you use: since I don't have multiple data sets, I'll use a pooled standard deviation.
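For anyone following along, a pooled standard deviation just combines the scatter from several small groups of repeat readings into one overall noise estimate, weighted by each group's degrees of freedom. A minimal sketch (the readings are made up purely for illustration):

```python
import numpy as np

def pooled_std(groups):
    # Pooled standard deviation across groups of repeat measurements:
    # sqrt( sum((n_i - 1) * s_i**2) / sum(n_i - 1) )
    dof = np.array([len(g) - 1 for g in groups])
    var = np.array([np.var(g, ddof=1) for g in groups])
    return np.sqrt(np.sum(dof * var) / np.sum(dof))

# Repeat sharpness readings taken at three points in a test (made-up numbers).
groups = [np.array([0.95, 1.02, 0.99]),
          np.array([0.71, 0.74]),
          np.array([0.55, 0.58, 0.60])]
print(pooled_std(groups))
```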

It is also useless to use a push cutting test when the method of blunting is slicing,

Why? Push cutting tests reflect the force required to get a cut started.


and in any case the whole thing is useless outside of the manufacturing industry for the reasons noted above. It boggles me that this is ignored when it has been pointed out here for years; manufacturers have even publicly stated it leads to defective results, and it has been known in Germany for over 50 years.

OK -- you don't like mechanical sharpness testing. But you also don't like human sharpness testing, due to bias. Is there any kind of sharpness testing you like?

Carl
 
I showed this clearly by pointing out that the actual raw data fit the model precisely, but because of the manipulations on it and the additional nonlinearity, it failed to fit the same mechanical model and was undefined. -Cliff

Which manipulations are you talking about? Every plot in the paper was a plot of raw data. REST sharpness, total media cut, media cut per stroke. All are raw data. So what are the "manipulations" that led to the "additional nonlinearity"?

Carl
 
The only thing that works for me is real-life experience with a specific knife. Either it performs for me or it does not. I look forward to trying an FF D2 knife. I hope it is a step forward.

Mike,

I'm likely going to be traveling through Anchorage the last week of August, en route to the Alaska Expedition Company's lodge, where we'll be talking FFD2 knives. If you'd like to see one, we might be able to meet at the airport. Perhaps I might even be able to let you try one for a week, while I'm out at the lodge.

Let me know if this sounds interesting.

Carl
 
I have several measures of sharpness and cutting ability. The curves I have described on the webpage are essentially a power law based on the physics of the mechanics of blunting.
-Cliff

Can you please show/describe, in physics or engineering principles, the "physics of the mechanics of blunting"? I have a hard time seeing this accounted for in the data/analysis that you have provided or directed us to on your website.

Thanks,

TN
 
Push cutting tests reflect the force required to get a cut started.

Push cutting sharpness is different from slicing sharpness. You can be very high in one and low in the other; thus, if you are testing slicing ability, you would look at slicing sharpness.

OK -- you don't like mechanical sharpness testing. But you also don't like human sharpness testing, due to bias.

"Like" has no part in this discussion, and this isn't a personal preference. I like the taste of mangos; however, SVD is a better choice for systems of linear equations because it is more robust. This is math, not opinion.

I have said the mechanical testing has a systematic bias; this is just math. Human testing is fine; you just need to account for the personal preference bias with methods as I have noted. This again is just math.

Which manipulations are you talking about?

The calculations on the raw data to produce the plot. "Manipulation" was likely a poor choice of word, because you seem to think it implies improper behavior. It can, but that wasn't how I meant it there. "Transformation" would have been a better term. As a simple example, consider this transformation:

f'(x_n) = f(x_{n+1}) - f(x_n)

This is a well-known transformation; it will EXPLODE the noise and would never be done on experimental data. You would model the base data and transform the coefficients if required. This is all really basic modeling procedure.
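A quick way to see why: each difference of two independent readings carries sqrt(2) times the single-reading noise, while the signal in each step is only the small change between neighbouring points. A minimal sketch, with made-up curve parameters and noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth, slowly decaying "sharpness" curve plus measurement noise
# (curve parameters and noise level are made up for illustration).
x = np.arange(0.0, 100.0, 2.0)
true_y = 1.0 / (1.0 + 0.05 * x**0.8)
sigma = 0.02
y = true_y + rng.normal(0.0, sigma, size=x.size)

# The transformation above: f'(x_n) = f(x_{n+1}) - f(x_n)
dy = np.diff(y)
true_dy = np.diff(true_y)

print("typical true step size:", np.mean(np.abs(true_dy)))  # roughly 0.013
print("noise per difference  :", np.sqrt(2) * sigma)         # roughly 0.028
# The noise on each difference swamps most of the true steps, so the
# differenced data scatters even though the raw data fit the model cleanly.
```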

tnelson, I described this on the webpage where I developed the equations from first principles. I have also shown how they model the real data that I have collected and that others have collected, and how they even work for other applications such as dental scrapers.

-Cliff
 