Edge Retention Testing and Analysis

... but I'm not paid either.

So if there is an increase in sales of the FF knives which leads to a demand for the material do you or do you not make money from it?

No, primarily because I don't agree with the disclaimer.

Repeatability is the main concern, along with the correlation of the data to the actual conclusion, i.e., whether this would be useful for people. That would be the reason for the disclaimer.

I don't disagree with you, but several knife makers do.

Physics agrees with me; having numbers on their side is meaningless. It just means more of them are wrong.

But I haven't been able to understand how you estimated the uncertainty for the cut ratio. That's why I keep asking the question.

It is determined a hundred times or so in the Monte Carlo fits, and an average and standard deviation are calculated.
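For readers who want to see the mechanics, here is a minimal sketch of a Monte Carlo error estimate of this kind; the model form, noise level, and parameter values are illustrative assumptions, not the actual fitting code used here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative blunting model: sharpness vs. total media cut (assumed form).
def model(x, Ci, a, b):
    return Ci / (1.0 + a * x**b)

rng = np.random.default_rng(0)
x = np.linspace(20, 400, 15)                  # hypothetical media-cut totals
y_true = model(x, 1.0, 0.05, 0.7)             # assumed "true" curve
sigma = 0.05                                  # assumed measurement noise

estimates = []
for _ in range(100):                          # "a hundred times or so"
    y = y_true + rng.normal(0.0, sigma, x.size)    # resample synthetic noise
    try:
        p, _ = curve_fit(model, x, y, p0=[1.0, 0.05, 0.7], maxfev=5000)
        estimates.append(p[1])                # collect the fitted blunting factor a
    except RuntimeError:
        continue                              # skip trials that fail to converge

print(np.mean(estimates), np.std(estimates))  # the quoted average and standard deviation
```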

Would you show me how to transform the functions so the advantage of FFD2 is a million percent?

This is high school algebra; let f1 and f2 be two functions. If you are taking a difference, then scaling the functions will scale the difference:

Diff = C*f1 - C*f2 = C*(f1 - f2)

A ratio is no different; simply consider the transform g(x) = exp(x), which will induce a huge ratio. You have to show that the transform you use is physically meaningful.
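A quick numeric check of both claims (the values are arbitrary):

```python
import math

f1, f2, C = 5.0, 3.0, 10.0

# Linear scaling scales the difference by the same factor:
assert C * f1 - C * f2 == C * (f1 - f2)            # 20.0 either way

# A transform g(x) = exp(k*x) inflates a ratio without limit:
for k in (1.0, 5.0, 10.0):
    print(k, math.exp(k * f1) / math.exp(k * f2))  # e^2, e^10, e^20
# At k = 10 the "advantage" is e^20, roughly 4.9e8 -- well past a million percent.
```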

tnelson, as I noted:

.... If you assume it is wear based, then as the edge thickens, the pressure under the same force is reduced, so the rate is reduced; thus the 0.5 law.

This is basic differential equations. As noted, you will see this when changing the angle on a blade: the more you grind, the longer it takes to remove a similar amount of material, because the contact width increases.
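The differential-equation step can be checked numerically. If the wear rate is proportional to pressure, and the pressure at constant force falls as the worn width x grows, then dx/dt = k/x, whose solution is x = sqrt(x0^2 + 2kt), i.e. the 0.5 law. A minimal sketch, with arbitrary constants:

```python
import numpy as np

# Ideal wear-limited blunting: dx/dt = k / x  (pressure ~ 1/contact width ~ 1/x).
k, x0 = 1.0, 1.0                       # arbitrary wear constant and initial width
t = np.linspace(0.0, 100.0, 10001)
dt = t[1] - t[0]

x = np.empty_like(t)
x[0] = x0
for i in range(1, t.size):
    x[i] = x[i - 1] + dt * k / x[i - 1]    # forward Euler step

exact = np.sqrt(x0**2 + 2.0 * k * t)       # closed-form solution, x ~ t**0.5
print(np.max(np.abs(x - exact)))           # small: the numerics follow the 0.5 law
```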

However, this is just the ideal behavior. In reality the edge is simultaneously deforming and chipping, which changes the effective thickness. As well, the wear rate changes as the blunting shifts from wear of the matrix to the large carbides pulling out.

Now the physics behind these mechanisms has been presented in detail by several of the authors cited. I started with a simple ideal behavior (harmonic oscillator for example) and found that it was sufficient alone to model the behavior.

-Cliff
 
So if there is an increase in sales of the FF knives which leads to a demand for the material do you or do you not make money from it?

Yeah, I probably make about 5 cents per knife. The bladeforums page where I started defending myself against charges of dishonesty has had about 6,000 views. Suppose every one of those viewers bought a knife; that would net me about $300.00. Considering that my consulting rate is $200.00 per hour, it's safe to assume that I'm not on here for the money.

Physics agrees with me; having numbers on their side is meaningless. It just means more of them are wrong.
So why do you feel a need to argue with me when I agree with you?


A ratio is no different; simply consider the transform g(x) = exp(x), which will induce a huge ratio. You have to show that the transform you use is physically meaningful.
Yes, and I maintain that the "transformation" I use is physically meaningful. I do a difference and a ratio in order to convert from total media cut and total number of strokes to cut per stroke, which is the CATRA-defined measure of sharpness. Do you maintain that this is an improper transformation?
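For concreteness, here is a minimal sketch of that conversion on synthetic numbers (all values illustrative); it also shows where noise enters, since differencing two noisy totals adds their uncertainties:

```python
import numpy as np

rng = np.random.default_rng(1)
strokes = np.arange(0, 220, 20)                 # stroke counts (independent, noise-free)
total_cut = 40.0 * np.sqrt(strokes)             # hypothetical cumulative media cut
measured = total_cut + rng.normal(0.0, 2.0, strokes.size)  # with assumed noise

# Cut per stroke: difference of cumulative cut over difference of strokes.
cut_per_stroke = np.diff(measured) / np.diff(strokes)
print(cut_per_stroke)
# Each difference carries sqrt(2) times the single-measurement noise, and dividing
# by the (small) stroke increment scales the noise in the per-stroke values further.
```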

If I were calculating ratios of exponentials arbitrarily in order to inflate performance ratios, I'd feel that your concern was warranted. Is there a particular transformation we've done that you feel is unfair or inappropriate?


BTW, I'd still like to know what it is that becomes "undefined" in your analysis of our data. I haven't been able to make anything become undefined.

Thanks,

Carl
 
1. So why do you feel a need to argue with me when I agree with you?

2. I maintain that the "transformation" I use is physically meaningful. I do a difference and a ratio in order to convert from total media cut and total number of strokes to cut per stroke, which is the CATRA-defined measure of sharpness. Do you maintain that this is an improper transformation?

3. If I were calculating ratios of exponentials arbitrarily in order to inflate performance ratios, I'd feel that your concern was warranted. Is there a particular transformation we've done that you feel is unfair or inappropriate?

4. I'd still like to know what it is that becomes "undefined" in your analysis of our data. I haven't been able to make anything become undefined.

Thanks,

Carl

Cliff,

4 simple questions that deserve reasonable and simple answers. You have been dodging and dancing long enough.

Please answer them simply in plain language without the verbal eye-rolling.

I mean, if every one of your incendiary and willfully obtuse claptrap statements (when you make them) was linked back to this thread so your beloved throngs of "Stampies" could see how you run when challenged by those in an intellectual position to do so, well, how would that look?

Best Regards,

STeven Garsson
 
If I wasn't such a cheapskate I'd get these reports to have a look at their analysis - one for cutting thick pieces of wood across the grain, and one for slicing (using a deli slicer as their cutting method).

http://www.springerlink.com/content/x4071114877w327v/

http://www.springerlink.com/content/j5484662t7067624/

Both are considering friction as one of the factors in their models for cutting and slicing.

The wood paper also looks at the increasing wedging force on the side of the blade as the wood is being cut - obviously a factor when cutting thick pieces of wood (but considered a constant in Cliff's model). Obviously, if the wedge force is considered a constant in any model, that model will only apply to cutting where it is not a factor. I also notice that these gentlemen state up front that their model is theoretical.

The slicing paper states a fact that I think everyone realizes (intuitively at the least) when they are cutting, but I do not recall hearing it here yet - that the speed of the slice decreases the force needed for the cut. This is another factor not considered in Cliff's model. I guess this becomes much more apparent when you're using a deli slicer for your test machine. :D
I would assume this speed is constant using the CATRA tester. And will not be with a human tester (though I am not ready to throw out human testing on this basis).
 
From the link I posted earlier in the thread, kel_aa had uploaded both of those papers, and I have them. They do not compare steels, nor do they test edge retention. They factor in friction for the force needed to make a cut, not how it affects edge life. The slicing paper makes note of the way geometry is the determining factor for levels of friction, and also formulates for frictionless cuts.

They really won't help for the specific topic of this thread, though still interesting papers.
 
From the link I posted earlier in the thread, kel_aa had uploaded both of those papers, and I have them. They do not compare steels, nor do they test edge retention. They factor in friction for the force needed to make a cut, not how it affects edge life. The slicing paper makes note of the way geometry is the determining factor for levels of friction, and also formulates for frictionless cuts.

They really won't help for the specific topic of this thread, though still interesting papers.

Hardheart,
Thanks for that - I forgot those, maybe because I got invalid-link messages when I tried to take a look. I was relating it to the model for cutting more than to edge retention, and since objections to standard testing and data analysis were repeatedly referred back to a model that was presented as fact, I thought it was pertinent to the thread.

Can you explain how they model the forces to which they attribute the increased cutting ability at higher slicing speeds? I'd be interested in that.

It could apply to edge retention testing when the stopping point is the force required to cut, right? Objecting to a test because the slicing speed is not constant is not an objection I would make, but it is probably more significant than many of the objections made here to testing and analysis.

And they do indicate that the wedging force on the side of the blade is a constraint that should be mentioned if a model does not consider it. It is standard practice to state the limits of a theoretical model in its presentation.

I think slicing and cutting are inherently complicated from a theoretical standpoint. I do not believe there is any one theory that fully models this behavior accurately considering all the variables. IMO testing is the best basis for gaining understanding of it, and an empirical model using the best-fit line is enough. My point from the beginning was that testing is best done by first defining what you want to test for, then keeping all variables constant except those you want to investigate. This was objected to, also.
 
Here's part of the paper. The page is small, but you can resize it a few different ways. If it's still blurry when you do, I'll upload larger images of it.
 

Attachments: slice1.jpg, slicea.jpg, slice4.jpg
I'm getting a copy of Landes's book. Can you give me a chapter, a page number, or a page range to make it easier for me to get into the German?

Carl: if you have trouble with German, I might be able to help. You can email me anytime. I am moving in 8 days and may be off the internet for a while, but before and after I am happy to assist you with any language help you might need or could use.
 
yeah, I probably make about 5 cents per knife.

Not a huge incentive money-wise, but there are still other biases, which you are obviously aware of. This is why I cite other work when I talk about mine, and not just what I did, as it is obvious to anyone not completely naive that people will defend what they did. Thus in the above I cited the work of others which directly supports what I have stated.

So why do you feel a need to argue with me when I agree with you?

In that case you had an academic responsibility to note that the test request was absurd and meaningless. When you actually perform it and cite it, you are obviously reinforcing the opposite. There is a whole host of issues there.

Do you maintain that this is an improper transformation?

Yes, because it is not needed. I showed how you can calculate the performance ratio from the RAW data and compare directly. All you did was blow up the noise.

BTW, I'd still like to know what it is that becomes "undefined" in your analysis of our data. I haven't been able to make anything become undefined.

I showed that directly. The model works perfectly fine on the raw data. The same model parameters won't converge on the transformed data because the noise was too large.

-Cliff
 
Not a huge incentive money-wise, but there are still other biases, which you are obviously aware of. This is why I cite other work when I talk about mine, and not just what I did, as it is obvious to anyone not completely naive that people will defend what they did. Thus in the above I cited the work of others which directly supports what I have stated.
Of course I'm aware of my biases. I already told you about them.
I have a much stronger interest because I think it's a great process, and I want to see processes I've worked on get implemented. I like to think I made a difference. So I'm certainly not disinterested, but I'm not paid either.
But you seem to want to harp on the financial aspects.

In that case you had an academic responsibility to note that the test request was absurd and meaningless. When you actually perform it and cite it, you are obviously reinforcing the opposite. There is a whole host of issues there.
I challenge you to find anyplace that I either performed or cited the "rod test". If you can't find one, then you owe me an apology. And I expect you to be man enough to deliver it without any weasel words.

I showed that directly. The model works perfectly fine on the raw data. The same model parameters won't converge on the transformed data because the noise was too large.
You did not show that directly. If you did, please quote your message where you show it directly, and I will apologize for my statement at the beginning of this paragraph.

You have never clearly described the "model parameters" that "won't converge". I keep asking about this. I'd welcome a clear, step-by-step description of what you do, with what parameters, and the results of your calculations that "don't converge". I can't make my model fail to converge. I don't know what model you are using that won't converge. If you'd share it with me, then I could duplicate your results, which, as you are aware, is an important part of the scientific process.

I'm a bit frustrated with your unwillingness to participate in the scientific process. You accused me of biased, improper testing. You said my data won't hold up to scientific inquiry. I've published my data. I've shown results of my analysis. I've been clear about the analysis that I've done. I've given descriptions of procedures that others (e.g. Broos) have been able to understand.

You have not shared your algorithms. You've never told me what you've done. You've diverted the conversation. When I finally figure out what you've done, you can't even agree with a straight answer -- there's always a dodge, a statement that there are other things I might not have considered. Then when I ask what those other things are, you don't answer; you change the subject.

I'm willing to test my data against your models to see if it converges or not. But you won't tell me what your models are. So I'm unable to duplicate your work. According to the scientific method, that means it's your work, not mine, that should be in question. I've been as open and forthcoming as I can be. You've been evasive and secretive about your methods.

I'd love to be able to learn from you about your methods, but I can't learn just by studying the results. I need to study the methods. I'd certainly be grateful if you'd be willing to share more details about your analysis methods.

Sincerely,

Carl
 
This is why I cite other work when I talk about mine, and not just what I did, as it is obvious to anyone not completely naive that people will defend what they did. Thus in the above I cited the work of others which directly supports what I have stated.
In that case you had an academic responsibility to note that the test request was absurd and meaningless. When you actually perform it and cite it, you are obviously reinforcing the opposite. There is a whole host of issues there.

Since I'm getting lectured on how to perform experiments and publish results, I'll profess my inexperience.

I have exactly one peer-reviewed publication in knife sharpness testing. The citation of this paper is:

Sorensen, C.D., T.W. Nelson, S.M. Packer, and C. Allen, "Friction Stir Processing of D2 Tool Steel for Enhanced Blade Performance", Friction Stir Welding and Processing IV, R.S. Mishra, M.W. Mahoney, T.J. Lienert, and K.V. Jata, eds., TMS, Warrendale, PA, 2007, 409-418.

A preprint of the paper is available at this location.

Now, one paper in this field means I'm a novice. I'd like to learn from those with more experience. Cliff, since you have enough experience to tell me how I should behave, would you be so kind as to provide a list of your peer-reviewed papers in the field of knife testing? This would allow me to emulate you and better contribute to the science of knife testing.

Sincerely,

Carl
 
But you seem to want to harp on the financial aspects.

I mentioned it.

I challenge you to find anyplace that I either performed or cited the "rod test".

http://www.bladeforums.com/forums/showpost.php?p=4674412&postcount=140

You did not show that directly.

Parameters are undefined:

http://www.bladeforums.com/forums/showpost.php?p=4652989&postcount=103

Parameters are defined:

http://www.bladeforums.com/forums/showpost.php?p=4653019&postcount=104


You have never clearly described the "model parameters" that "won't converge".

I have told you the exact program I used, cited the text on the parameter error calculations, and explained clearly that the uncertainties are so large that the parameters are meaningless, i.e., the range of values predicted covers the absurd (the values invert +/-).


I'm a bit frustrated with your unwillingness to participate in the scientific process.

I have shown you the model, applied it to your data, and told you the program I used. I described the exact algorithm in step-by-step detail. If you are still unclear, then ask again and I will cite the posts where I already answered these questions.

... would you be so kind as to provide a list of your peer-reviewed papers in the field of knife testing?

In regards to paper journals, none. This is just a hobby. I started cutleryscience to do just that. The two review articles on it are peer reviewed, meaning I sent them to experts like Steve Bottorff and the other people I cited before they were on the website. I am assembling a list of people to act as formal reviewers now and there are other people who will be submitting articles on testing.

-Cliff
 
http://www.bladeforums.com/forums/showpost.php?p=4674412&postcount=140

Quoting from my earlier post, to which you link:
This test is an attempt to make an automatic test of one of Wayne Goddard's subjective tests.

I'm not sure what this test means, or what properties of the edge it measures, in either Wayne's implementation or Charles's implementation, so haven't really done anything with the data from that test.

Wayne believes, IIRC, that a soft edge will bend, a too-hard edge will chip, and a just-right edge will flex, then return, without chipping or bending. I believe that all of the blades in the test set we provided passed the test. Some of the FF parameter sets we tried resulted in chipping, others resulted in bending. But I don't have the data available.
I explained that others had done the test. I also explained that I had not done the test. I merely responded to a question about a test Charles had performed about which I had knowledge. This does not constitute either citation or performance.


Parameters are undefined:

http://www.bladeforums.com/forums/showpost.php?p=4652989&postcount=103
Here is the text of the link that you posted:
There is no statistical proof that the FF blade has superior edge retention, because the blunting factors are too uncertain due to a combination of three factors:

1) lack of data in the initial region where the rate of blunting is high
2) lack of data in the tail region of S90V
3) high uncertainty in the data

In regards to the initial low cutting ability of S90V: based on examining the actual data, there is a misconception in the above. It was stated that the S90V blade does not cut as much material at the start, but the first point measured is AFTER 20 CUTS. This is much too far into the test to make such a statement, as the initial rate of blunting can be very high for large-carbide steels, especially if the edges are acute and/or polished. I would affirm that if the cuts were measured at 2, 4, 8, and 16, then you would see a much higher initial performance from S90V.

Plotting the data as above is also a very poor way to compare the blades, because what it does is take two highly noisy dependent variables, divide them (magnifying the noise), and then plot one dependent variable against the other, so both the x and y axes are highly noisy. This actually requires very complex methods to fit properly, even with simple models, because it adds another element of nonlinearity to the problem. What should have been plotted is just the stroke count and the amount of media cut. This would give a much smoother curve, as the x-axis variable is then independent and has no noise. It is also much more straightforward to interpret, as it just shows how much media is cut after a certain number of passes.

The above graph attempts to show the sharpness after a certain amount of media is cut, but as I noted, it magnifies the noise and introduces an additional nonlinearity. I'll look at the other data shortly.
You assert that parameters are undefined, but you don't show which parameters from which fit algorithm. I've been asking this question now for two weeks, and never got an answer.

If by parameters, you mean the a, b, and c parameters, I would point out that the plot shows values of the a, b, and c parameters. The parameters are not undefined. The fit model did not fail to converge.

Parameters are defined:

http://www.bladeforums.com/forums/showpost.php?p=4653019&postcount=104
Quoting from your link:
Note how the plot is much smoother and more stable. This is due to cumulative plots being inherently an averaging process, plus not plotting two dependent variables against each other. Fitting this data is MUCH more robust because of these two issues, and note the model parameters are precisely defined.

Now, based on the model and the results, you can say that over the course of the material cut, the Friction Forged blade outcut the S90V blade by 30-50% under a given amount of work. This would be directly used by the user to infer how much more cutting you would be able to do with the knife for a given amount of fatigue.

It is also possible from the model (just use the equations) to find the other intercepts if so desired. If it is unclear, then just ask and I can perform the required calculations.

Again, you don't describe which parameters are good.

If by parameters, you mean the a and b parameters, I see that a and b parameters fit. However, I don't believe you've explained the model that you are fitting to obtain a and b. So I still can't replicate your work.

I have told you the exact program I used, cited the text on the parameter error calculations, and explained clearly that the uncertainties are so large that the parameters are meaningless, i.e., the range of values predicted covers the absurd (the values invert +/-).

I've asked for a mathematical definition of "absurd", and you've been unwilling to give it. The measured uncertainty (0.24) is smaller than the mean value (0.34). If you have some other criterion, would you please explain what you mean?
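One concrete criterion, offered here as a suggestion rather than as Cliff's actual rule, is to ask whether a parameter differs from zero by at least two standard errors:

```python
# A two-standard-error (roughly 95%) significance check on the quoted values.
mean, sigma = 0.34, 0.24
t_ratio = mean / sigma                    # ~1.4
print("significant" if abs(t_ratio) >= 2.0 else "not significant at ~95%")
```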


I have shown you the model, applied it to your data, and told you the program I used. I described the exact algorithm in step-by-step detail. If you are still unclear, then ask again and I will cite the posts where I already answered these questions.

You have agreed to my descriptions of step-by-step algorithms, with qualifications. I'd love to see links where you provided a clear step-by-step algorithm.

In regards to paper journals, none.

I'd be satisfied with electronic journals, as well.

This is just a hobby.
Amateur scientists can publish papers in journals, as well.

I started cutleryscience to do just that. The two review articles on it are peer reviewed, meaning I sent them to experts like Steve Bottorff and the other people I cited before they were on the website. I am assembling a list of people to act as formal reviewers now and there are other people who will be submitting articles on testing.

Quoting from an earlier post from Cliff about the articles on cutleryscience.com:
A lay person's summary. The math would be the exact response to the data on multiple knives (CATRA results and sharpening frequencies for the users) and the lack of correlation between these, and the systematic error of a blunting mechanism which is supported by published papers and also quite immediately obvious from first principles of basic physics, as I outlined in the above.
Congratulations, you have deduced that an article written for lay people, to introduce a fundamental blunting mechanism and the means to model it, is not written in the same style as a formal paper for a physics journal would be.

Cliff states that the papers on cutleryscience.com are not scientific papers. Further, cutleryscience.com would qualify as vanity publishing, not scientific publishing, since the website is under the control of the author.

Cliff, I agree that you have extensive knowledge of knives. But I don't agree that you are the ultimate arbiter of how scientists should behave. Scientists expose their work to others and have it reviewed. When others ask for more information, they provide it. Scientists provide specific, measurable assertions. I've asked for specific, measurable results. The only specific results you have given me have been fits using gnuplot, and you have not described how you decide whether the uncertainty in the fit parameters is "too large".

When you have scientific, not lay, papers published through a peer reviewed process, then I might listen to you when you lecture me on how scientists behave. In the meantime, rather than having us lecture each other, suppose we just ask and answer technical questions as specifically as we can?

Thanks,

Carl
 
I'd better pop another vitamin so I can keep up with these posts, but I must say, the reading does shed some light on why we are knife addicts.
 
When you have scientific, not lay, papers published through a peer reviewed process, then I might listen to you when you lecture me on how scientists behave. In the meantime, rather than having us lecture each other, suppose we just ask and answer technical questions as specifically as we can?

CLIFF STAMP AVOIDS THE DIFFICULT QUESTIONS......PUTS FORWARD SOME OPAQUE CRAP, and RUNS LIKE A COWARD WHEN THE GOING GETS TOO TOUGH.

Good luck in getting a straight answer.

Thanks for the continued effort and commitment to furthering cutlery knowledge in such a fair and "NON-talking down to us lowly non-PhD maroons" manner, Carl; it is appreciated.

Best Regards,

STeven Garsson
 
Hi,

Might it be that Cliff's problem in describing these parameters, etc., is that he's using techniques from multivariate statistics? I.e., things like Mahalanobis distance, multidimensional scaling, or factor rotation to transform his "observed" parameter values into uncorrelated (independent) "artificial" parameter dimensions.

Of course, even with the above cleanup of the parameter space, you still have the standard statistical problems: the number of observations should be much greater than the number of dimensions, you should test not one but a few dozen blades per type, etc.

So you should still give and describe the ways your model might have internal bias, which sample sets were too small, where you neglected to look at the underlying distribution of parameters, what kind of errors you have in measuring a parameter, etc.

Having done that myself, and having documented the lies and biases in my own analyses and their statistics (starting with the shortcomings of the mapping between model and reality), I think such "self-analysis" is also part of being responsible in science. But I have rarely read reports or articles with non-trivial statistics that include such a discussion section. It doesn't seem to sell well?

It seems to me that the saying "don't believe statistics you haven't faked yourself" is true... not so much in the sense of intentional lies, but rather of incompletely documented shortcomings.

But at least this thread DOES serve as an interesting discussion section on the making of the friction forging statistics :)

cu
Peter
 
Hi,

Might it be that Cliff's problem in describing these parameters, etc., is that he's using techniques from multivariate statistics? I.e., things like Mahalanobis distance, multidimensional scaling, or factor rotation to transform his "observed" parameter values into uncorrelated (independent) "artificial" parameter dimensions.

Of course, even with the above cleanup of the parameter space, you still have the standard statistical problems: the number of observations should be much greater than the number of dimensions, you should test not one but a few dozen blades per type, etc.

I suppose something like that could be the problem. But if it is, then all he needs to do is explain it. If he'd give a statistical/mathematical description of the problem, then I'd know how to respond.

Thanks for your input,

Carl
 
Since Cliff hasn't told me how he proceeds, I can't duplicate his work. So I'll show you what I would do to determine cut ratio based on the curve fits.

First, I want to point out that the specific values of the "blunting parameters" are of very little importance in this analysis, IMO. These parameters are just "semiempirical" (I would say "empirical", but that's a minor point of difference) parameters used to make a curve fit the data. Once we have the curve fitting the data, we can use the curve, even if there is uncertainty in the parameters.

Since the metric of interest here is cut ratio, or the relative cutting ability of two different knives at a given sharpness, we'll calculate this ratio, given the curves.

For each curve, we have C(x) = Ci/(1 + a*x^b). We solve this for x, the total media cut at a given sharpness C(x). This results in

x = ((Ci/C(x) - 1)/a)^(1/b)

We then calculate the cut ratio as x1/x2, where 1 and 2 represent the different blades.
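In code, that inversion and ratio look like this (the fit parameters below are placeholders, not the actual fitted values):

```python
import numpy as np

def media_cut_at_sharpness(C, Ci, a, b):
    """Invert C(x) = Ci/(1 + a*x^b) for x, the total media cut at sharpness C."""
    return ((Ci / C - 1.0) / a) ** (1.0 / b)

# Placeholder fit parameters for the two blades:
ffd2 = dict(Ci=1.2, a=0.010, b=0.60)
s90v = dict(Ci=1.0, a=0.016, b=0.55)

sharpness = np.linspace(0.2, 0.9, 8)   # illustrative CATRA sharpness grid, in/stroke
cut_ratio = (media_cut_at_sharpness(sharpness, **ffd2)
             / media_cut_at_sharpness(sharpness, **s90v))
print(cut_ratio)                        # x1/x2 at each sharpness level
```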

I've plotted the data from this calculation below. It may be difficult to understand. On the X-axis, we plot the CATRA sharpness, in inches of media cut per stroke. Low sharpness corresponds to later in the test, so the curve appears "backwards" from what one would expect: the initial results are on the right-hand side of the curve, and the final results are on the left-hand side. On the Y-axis, we plot the cut ratio, or the total media cut for FFD2 divided by the total media cut for S90V. If the performance were equal, the cut ratio would be 1.

[Attached image: CutRatioPlot.jpg]


As you can see, there is a very high initial cut ratio. This is due to the unexplained higher slicing sharpness for the FFD2 blade at an equivalent REST push sharpness.

Later on in the test, the cut ratio decreases, then finally increases. The minimum cut ratio is a bit over 5.

My earlier hand calculations showed a cut ratio of about 2. The difference between the two calculations is that this calculation is based on the average, or curve fit, estimate of the performance. The previous calculation was based on the maximum estimate of the S90V performance and the minimum estimate of the FFD2 performance.

What are the potential problems? I think I finally figured out what Cliff was concerned about. The a parameter for S90V has a mean value of 0.0016, with an uncertainty of 0.0019, which means it can't be shown to be statistically different from zero. If the parameters were independent, we'd set it to zero and recalculate the curve. However, the covariance matrix of the parameters shows that they are not independent, so we can't just toss it out.
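The kind of check described above can be sketched with scipy; the data arrays here are placeholders standing in for the S90V measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, Ci, a, b):
    return Ci / (1.0 + a * x**b)

# Placeholder data (not the actual S90V measurements):
x = np.array([20, 40, 80, 120, 160, 200, 240, 280], dtype=float)
y = np.array([0.95, 0.90, 0.83, 0.78, 0.74, 0.71, 0.69, 0.67])

popt, pcov = curve_fit(model, x, y, p0=[1.0, 0.01, 0.7], maxfev=10000)
perr = np.sqrt(np.diag(pcov))          # standard errors of Ci, a, b
corr = pcov / np.outer(perr, perr)     # parameter correlation matrix

for name, val, err in zip(("Ci", "a", "b"), popt, perr):
    print(f"{name} = {val:.4f} +/- {err:.4f}")
print(corr)  # large off-diagonal entries mean the parameters are not independent
```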

I don't attribute this parameter value to "noise", as Cliff does. Rather, I attribute it to the fact that the measured S90V data doesn't appear to have the same shape as the fitted model. I think this is what Cliff meant when he said we didn't test the S90V long enough. But we were only interested in the data during the sharpest portion of the blade life.

In future testing, I'll be more careful to do multiple repeats, micrographs of the blades, etc. I agree that the data could be better. But I also believe that the data does tell a story and have some information, and that the FFD2 performance is better than S90V.

Carl
 
I think I finally figured out what Cliff was concerned about. The a parameter for S90V has a mean value of 0.0016, with an uncertainty of 0.0019, which means it can't be shown to be statistically different from zero. If the parameters were independent, we'd set it to zero and recalculate the curve. However, the covariance matrix of the parameters shows that they are not independent, so we can't just toss it out.

Carl,
Does this indicate the data correlation is bad, or does it just indicate the results do not fit acceptably into Cliff's theoretical model?
 