Edge Retention Testing and Analysis

Push cutting sharpness is different from slicing sharpness.

We accept the fact that push and slice sharpness may be different! We are not using the CATRA to measure sharpness. I'll state it again for you: we used the CATRA as a means of producing physical wear on the blades.

The calculations on the raw data to produce the plot.

What part of "RAW DATA" do you not understand, Cliff? There were no calculations or manipulations. Again, we presented RAW DATA!

tnelson, I described this in the webpage where I developed the equations from first principles. I have also shown how they model the real data that I have collected, that others have collected and even work on other applications such as dental scrapers.

I do not see any first principles of physics or engineering in this reference or in any of your posts. However, you continue to make reference to "physics" or "mechanics". Would you please show/describe, in physics or engineering principles, the "physics of the mechanics of blunting"?

TN
 
Push cutting sharpness is different from slicing sharpness. You can be very high in one and low in another, thus if you are testing slicing ability you would look at slicing sharpness.

I understand, and I have clearly differentiated between the two different sharpness measures. The assertion that I am challenging is when you assert that slicing wear can't be used with pushing sharpness.


Like has no part in this discussion, and this isn't a personal preference. I like the taste of mangos; however, SVD is a better choice for systems of linear equations because it is more robust. This is math, not opinion.
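Cliff's point about SVD robustness can be made concrete. Here is a minimal NumPy sketch (the matrix and numbers are made up for illustration, not data from this thread) showing why an SVD-based least-squares solve is preferred over forming the normal equations when columns are nearly collinear:

```python
import numpy as np

# Two nearly collinear columns: forming the normal equations A^T A
# squares the condition number, while an SVD-based solve works on A
# directly and stays well behaved.
eps = 1e-6
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + eps],
              [1.0, 1.0 - eps]])
b = np.array([2.0, 2.0, 2.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)   # uses SVD internally

cond_A = np.linalg.cond(A)                  # roughly 2.4e6 here
cond_normal = np.linalg.cond(A.T @ A)       # roughly cond_A squared
print(cond_A, cond_normal)
```

Solving via the normal equations would have to invert the matrix with the squared condition number, which is how precision gets lost; the SVD route avoids that entirely.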

Sure, but we're not talking about the quality of the fits, which is objective data. You said that a mechanical test would only find "which knife is best for a machine to use". Unless you have data (which I have not seen presented) that shows either a correlation or a lack of correlation between mechanical tests and human performance, the statement that machine tests are irrelevant to human use is just an opinion, and is not mathematically or physically based.

I have said the mechanical testing has a systematic bias, this is just math.
Human testing is fine, you just need to account for the personal preference bias with methods as I have noted. This again is just math.

You explained the need for double-blind testing in human tests (although you never used this commonly accepted term). But I've never seen where you show the systematic bias in machine testing. I'd be delighted if you could point me to a reference, or give me a clear mathematical explanation of this systematic bias.



The calculations on the raw data to produce the plot. Manipulation is likely a poor choice because you seem to think it implies improper behavior. It can, but that wasn't how I meant it there. Transformation would have been a better term.

I'm sorry if I contributed to any misunderstanding. I can easily see that "manipulation", like "plotting", is a word that can be misinterpreted.

But I don't think you understand my points. There were no calculations or transformations to produce the plots. All of the plots consisted simply of raw data.

As a simple example, consider this transformation:

f'(x_n) = f(x_{n+1}) - f(x_n)

This is a well-known transformation, and it will EXPLODE noise; it would never be done on experimental data. You would model the base data and transform the coefficients if required. This is all really basic modeling procedure.

Actually, in its correct form of f'(x_n) = (f(x_{n+1}) - f(x_n)) / (x_{n+1} - x_n), this transformation is done quite often on experimental data where the noise is low. It allows one to calculate the derivative with only two data points, which makes it the quickest possible calculation. Of course, as one includes more and more data points, the estimate of the derivative gets better, but at the expense of the amount of data that needs to be collected before the derivative can be estimated. The specific choice of derivative calculation method is a subjective choice, based on real-world tradeoffs, not fundamental physical laws. Of course, there are consequences to the choice.
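The noise tradeoff both posters are arguing about can be sketched numerically. The sine signal and noise level below are illustrative assumptions, not measurements from this discussion:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 101)          # uniform spacing h = 0.1
h = x[1] - x[0]
noise_sigma = 0.01
y = np.sin(x) + rng.normal(0.0, noise_sigma, x.size)

# Two-point forward difference: the quickest derivative estimate,
# but differencing adjacent points combines their noise and then
# divides by a small h, scaling the noise up by 1/h.
deriv = (y[1:] - y[:-1]) / h

# Expected noise on the derivative is sqrt(2)*sigma/h, here about
# 0.14 -- an order of magnitude above the 0.01 noise in y itself.
resid = deriv - np.cos(x[:-1] + h / 2)   # compare at the midpoint
print(resid.std())
```

So both statements in the thread hold: the transformation amplifies noise, and it is still routinely usable when the noise is small compared to the differences being taken.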
 
You would model the base data and transform the coefficients if required. This is all really basic modeling procedure.
-Cliff

I agree, so I'm not sure why you calculate cutting ratio on raw data instead of using the model of the base data. I will be using the model of the base data to calculate the cutting ratio of FFD2 compared to the other test blades.

Carl
 
What part of "RAW DATA" do you not understand, Cliff? There were no calculations or manipulations.

This is a lie, column H in the spreadsheet listed by you clearly shows it to be a calculation. Raw data means the quantities were just as recorded with no calculations, the direct observed physical quantity recorded.

I understand, and I have clearly differentiated between the two different sharpness measures. The assertion that I am challenging is when you assert that slicing wear can't be used with pushing sharpness.

Ok, this is a perfectly valid challenge. Now I ask you the following question: if you accept that they are two different physical quantities, then you also accept that they will behave differently? Now don't you see an obvious problem with a knife which is blunting by slicing while you are quantifying the rate of blunting (loss of sharpness) by measuring the wrong type of sharpness?

Unless you have data (which I have not seen presented) that shows either a correlation or a lack of correlation between mechanical tests and human performance, the statement that machine tests are irrelevant to human use is just an opinion, and is not mathematically or physically based.

I gave you two examples; one is public, the other will be shortly. Plus I find it completely unreasonable that you actually contend this, because it shows a complete lack of understanding of even basic physics and general issues of biomechanics. Plus, like I said, this has been known and published in German for over 50 years; do you really think I just made up all those references? That is just absurd.

...give me a clear mathematical explanation of this systematic bias.

This is again actually unreasonable, because the burden of proof is on you to actually prove the correlation. But as I have noted, there are examples of where it fails. Buck admitted this publicly and noted that they had to stop and switch to human testing because the CATRA results were meaningless on the Ionfusion blades. Now there is no way you can't figure out why and realize that is a huge systematic bias in general. You have to be trolling here.

I have in fact shown many times how the same principle makes comments like "I have cut nails in half with XXX knife with no damage." completely meaningless. Again, this is all first year physics, nothing advanced. There is no way anyone with any level of engineering background would not see this as perfectly obvious. Plus anyone who used a knife would know the problem right away. It has in fact been described as a problem on the forums for years (the same concept I mean). So all serious knife users are aware of it.

Actually, in its correct form of

I stated a difference function; it was perfectly valid, as are all transformations within their domains. It will however blow up the noise, as I noted. Difference functions are also not used simply when the data noise is low, but when the noise is low compared to the difference. Otherwise your slope function will be undefined and just oscillate about zero.

The specific choice of derivative calculation methods is a subjective choice, based on real-world tradeoffs, not fundamental physical laws.

No, it isn't a subjective choice. Really, that is how you see choosing statistical and numerical methods? Wow, that is one of the worst interpretations I have seen in regards to numerical analysis. This is all supposed to be defined using math, even a choice as simple as whether or not to include a data point in the analysis. It isn't subjective at all.

I agree, so I'm not sure why you calculate cutting ratio on raw data instead of using the model of the base data.

Because the model assumes something and you don't need to. It shows you exactly what the data says with no assumptions on behavior and allows you to calculate the uncertainty in the ratios directly from the uncertainty in the data. From the models you would have to do a fairly complex calculation on the parameters, uncertainties and correlation coefficients and quality of fit and this would only be valid if the model is true. From the data there is no if, that is just what it shows directly.

-Cliff
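Calculating the uncertainty in the ratios directly from the uncertainty in the data, as Cliff describes above, is ordinary first-order error propagation. A minimal sketch with hypothetical cut totals (not the actual test numbers):

```python
import math

def ratio_with_uncertainty(a, sigma_a, b, sigma_b):
    """First-order propagation for r = a/b with independent errors:
    (sigma_r / r)^2 = (sigma_a / a)^2 + (sigma_b / b)^2."""
    r = a / b
    sigma_r = abs(r) * math.sqrt((sigma_a / a) ** 2 + (sigma_b / b) ** 2)
    return r, sigma_r

# Hypothetical example: blade A cuts 120 +/- 5 mm of media,
# blade B cuts 80 +/- 4 mm over the same number of strokes.
r, sr = ratio_with_uncertainty(120.0, 5.0, 80.0, 4.0)
print(f"{r:.2f} +/- {sr:.2f}")
```

This works straight from the measured quantities and their errors; propagating uncertainty through fitted model parameters would additionally require the parameter covariances, which is the "fairly complex calculation" referred to above.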
 
This is a lie, column H in the spreadsheet listed by you clearly shows it to be a calculation. Raw data means the quantities were just as recorded with no calculations, the direct observed physical quantity recorded.


I gave you two examples; one is public, the other will be shortly. Plus I find it completely unreasonable that you actually contend this, because it shows a complete lack of understanding of even basic physics and general issues of biomechanics. Plus, like I said, this has been known and published in German for over 50 years; do you really think I just made up all those references? That is just absurd.

Do you have any idea how directly insulting you were up above?

Do you really think that an MIT graduate and department Co-chair of an institution as prestigious as BYU has no understanding of basic physics?

I have no understanding of physics, but who in the hell do you think you are to come off like that?

Was this intentional, or was it simply an accident?

It seems at the least, beneath the realm of professional academics working out a difference of definitions in a public forum.

Best Regards,

STeven Garsson
 
"The calculations on the raw data to produce the plot. Manipulation is likely a poor choice because you seem to think it implies improper behavior." (Cliff in today's earlier posting)

Soooo...

"Now I am going to show you how this is an example of really misleading and biased statistical analysis."...

"This is why I have said many times that what you do has no bearing on if something is "scientific" or not, but how the data is interpreted and what conclusions are drawn. This is an example of a lot of precise numbers which are not utilized properly and the conclusions presented are not rigorously supported by the analysis.
-Cliff" (first posting on the first thread)

"My main point was that their analysis is very selective." (posting #9 first thread)

"If you want to look at selective parts of the data you can support any conclusion you wish to propagate. That is one of the common ways statistics are used to distort reality." (posting #11)

"This is another problem with the analysis. If the blades are of identical geometry and sharpness then the initial cutting ability has to be the same, since the latter is false so is the former." (posting #17)

"...wants an unbiased comparison." (posting #25)

"...you can easily pick the same method which will bias the results in favor of a steel..." (posting #30)

"There is a huge logic problem here..." (posting #36)

(And this is only the first two pages of the discussion! Almost every post included some negative connotation about the work done!)

Gee, wonder where we got the idea that "manipulation" is a bad term? Ok, he didn't use that word here, but it is what he is saying!


Cindylou

PS "Essentially, it is like torch hardening the edge." (#3)

Essentially he's wrong. Did he ever get that? Will he?

PPS "Nonsense, the ones who are confident in their results release the information like Glesser. All such analysis is SUPPOSED to be subject to critical analysis. Science doesn't proceed on faith and blind worship."

So why won't Cliff just open his stuff to analysis? Oh yeah, it's too hard for us to understand and we're supposed to just believe HIM!
 
"Do you really think that an MIT graduate...has no understanding of basic physics? "

Thanks Steven!


Cindylou
 
This is a lie, column H in the spreadsheet listed by you clearly shows it to be a calculation. Raw data means the quantities were just as recorded with no calculations, the direct observed physical quantity recorded.

You are right; cut per stroke is a calculated quantity. My bad.

Ok, this is a perfectly valid challenge. Now I ask you the following question: if you accept that they are two different physical quantities, then you also accept that they will behave differently? Now don't you see an obvious problem with a knife which is blunting by slicing while you are quantifying the rate of blunting (loss of sharpness) by measuring the wrong type of sharpness?

I don't see a problem, because every model I have seen supported by data shows push cutting sharpness to be a strong function of edge width or edge radius. And the means chosen in our work to increase the edge radius is, in my opinion, a reasonable representation of typical uses of hunting knives.

There is not a mathematical basis for my assertion, because I have not provided any correlation or other mathematical proof. However, I am aware of no mathematical basis for denying the assertion, because I am aware of no mathematical data that proves a lack of correlation.

You have asserted that Buck's experience with Ionfusion blades is mathematical evidence. I'm not aware of Buck's experience, and asked you to provide a reference, since you are aware of it. But you seem to be unwilling to.

I have already mentioned, both in the paper and on the forum, that I don't believe CATRA ERT media provides results representative of typical human use, because of the embedded silica. So having the CATRA ERT test be poorly correlated with human use is not surprising to me.

It was you who made the assertion that machine testing can't be valid for defining human performance. Since I've not seen any data supporting this claim (just an allusion to a Buck test with different wear media), I repeat that we have a difference of opinion here. It's just opinion. Neither one of us has provided math or physics supporting our positions. Neither one of us has provided supporting data. And I believe my opinion ought to carry as much weight as yours, although you're free to disagree.

I gave you two examples; one is public, the other will be shortly. Plus I find it completely unreasonable that you actually contend this, because it shows a complete lack of understanding of even basic physics and general issues of biomechanics. Plus, like I said, this has been known and published in German for over 50 years; do you really think I just made up all those references? That is just absurd.

No, you didn't give me two examples. You referred to two examples in general terms, when I asked for mathematics. You assert it's all mathematics. I haven't seen the mathematics. I asked for a reference; you gave a vague answer. I'm still waiting for mathematical proof, not just appeals to authority (e.g. everybody knows, the Germans knew it 50 years ago, if you had any knowledge of physics you'd know). If it really is math, please just show me the math. I think I've demonstrated the ability to understand your math, even when you try not to explain it clearly.

This is again actually unreasonable, because the burden of proof is on you to actually prove the correlation. But as I have noted, there are examples of where it fails. Buck admitted this publicly and noted that they had to stop and switch to human testing because the CATRA results were meaningless on the Ionfusion blades. Now there is no way you can't figure out why and realize that is a huge systematic bias in general. You have to be trolling here.

I would clearly have the burden of proof if I wanted to assert that there is correlation. I have not asserted there is correlation. I have clearly stated that it is my opinion that there is correlation, but it's not proven mathematically.

However, you assert a "huge systematic bias in general", and assert that it's based in math and physics. So I'm still waiting for a mathematical demonstration of the "huge" bias you contend exists.



I have in fact shown many times how the same principle makes comments like "I have cut nails in half with XXX knife with no damage." completely meaningless. Again, this is all first year physics, nothing advanced. There is no way anyone with any level of engineering background would not see this as perfectly obvious. Plus anyone who used a knife would know the problem right away. It has in fact been described as a problem on the forums for years (the same concept I mean). So all serious knife users are aware of it.

Exactly what principle are you referring to here? Is it a fundamental physics principle? A fundamental mathematics principle? If so, it will probably have a name, like "conservation of energy", or "Newton's first law", or "the fundamental theorem of algebra". If you will explain what the principle is, then we can have a discussion about whether it applies to this situation.

As far as I know, many serious knife users on the forum place great stock in rope cutting tests like Wayne Goddard's. When I read Blade magazine, it's standard for knife reviewers to test cutting ability by cutting hemp rope. On your sharpness review page you refer to rope slicing tests that are generally accepted. On the basis of these tests, I believe the physics of cutting rope to be generally similar to the physics of general use for a hunting knife, although I can't prove it mathematically.

I stated a difference function, it was perfectly valid as are all transformations within their domains. It will however blow up the noise as I noted. Difference functions are also not used when the data noise is low but when the noise is low compared to the difference. Otherwise your slope function will be undefined and just oscillate about zero.

So you agree that difference functions are used when the noise is low compared to the difference. I think that's just what I said, but with an additional qualification.


No, it isn't a subjective choice. Really, that is how you see choosing statistical and numerical methods? Wow, that is one of the worst interpretations I have seen in regards to numerical analysis. This is all supposed to be defined using math, even a choice as simple as whether or not to include a data point in the analysis. It isn't subjective at all.

Would you point me to a single reference where somebody who is a recognized authority on Numerical Methods, other than Cliff Stamp, says that the choice of numerical method to apply to a problem is defined by first principles?

I don't believe you can find one. In applying numerical methods, there is almost always more than one way to perform a numerical calculation. Some are more precise than others; some take longer than others. Some work well with one kind of data; others with other kinds of data. As an example of this, you can refer to Numerical Recipes, Section 4.6 on multidimensional integration, where they describe the tradeoffs to be made in determining the integration method.
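Carl's point about tradeoffs can be illustrated with one-dimensional integration (a simpler stand-in for the multidimensional case Numerical Recipes discusses): two standard rules, the same number of function evaluations, different accuracy.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule: O(h^2) error, simplest per point."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def simpson(f, a, b, n):
    """Composite Simpson rule (n even): O(h^4) error for smooth f."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Same integrand, same work: the integral of sin over [0, pi] is 2.
err_trap = abs(trapezoid(math.sin, 0.0, math.pi, 100) - 2.0)
err_simp = abs(simpson(math.sin, 0.0, math.pi, 100) - 2.0)
print(err_trap, err_simp)
```

Neither rule is "the" correct one; Simpson wins here on smooth data, while the trapezoid rule is often preferred for noisy or irregular data, which is exactly the kind of context-dependent choice at issue.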

The calculations on the raw data to produce the plot. Manipulation is likely a poor choice because you seem to think it implies improper behavior. It can, but that wasn't how I meant it there. Transformation would have been a better term. As a simple example, consider this transformation:

f'(x_n) = f(x_{n+1}) - f(x_n)

This is a well-known transformation, and it will EXPLODE noise; it would never be done on experimental data. You would model the base data and transform the coefficients if required. This is all really basic modeling procedure.

Because the model assumes something and you don't need to. It shows you exactly what the data says with no assumptions on behavior and allows you to calculate the uncertainty in the ratios directly from the uncertainty in the data. From the models you would have to do a fairly complex calculation on the parameters, uncertainties and correlation coefficients and quality of fit and this would only be valid if the model is true. From the data there is no if, that is just what it shows directly.

Cliff, you can't have it both ways. It's either better to model the data before the transformation, or to do the transformation before the modeling. Or, alternatively, if sometimes you should model first, and other times you should transform first, then you need to admit that there are choices to be made. And if you can't give me specific mathematical reasons (which should include the results of calculations on the data), then in my opinion you're admitting that there is a subjective component when deciding how to transform the data to derive meaning from it.

Carl
 
Ok, this is a perfectly valid challenge. Now I ask you the following question: if you accept that they are two different physical quantities, then you also accept that they will behave differently? Now don't you see an obvious problem with a knife which is blunting by slicing while you are quantifying the rate of blunting (loss of sharpness) by measuring the wrong type of sharpness?

We quantified the rate of blunting with both sharpness methods: push cutting and slicing. We used push cutting to determine when shaving sharpness was lost. The method we used to wear the blade gave us a slicing sharpness measurement as well, so we included it.

I can understand why you might have concerns about using the slicing sharpness data. The most obvious reason is that the slicing sharpness of FFD2 was significantly higher than the slicing sharpness of the other blades, even though the push sharpness was equivalent. And we don't have a good answer for why that is. I recognize that limitation. But the data is repeatable, and appears to be also consistent with user experience. So we put it out.

I'm firmly of the opinion that the best thing to do is to make data as accessible as possible, and let people disagree with our analysis by performing alternative analyses. I'm still waiting for your analysis that disproves our contentions of superior performance of FFD2.

Thanks,

Carl
 
I recognize that limitation. But the data is repeatable, and appears to be also consistent with user experience. So we put it out.

Cliff,

Carl can admit that he has limitations. When was the last time you admitted on bladeforums that you have limitations or were wrong?

cindylou
 
This is a lie, column H in the spreadsheet listed by you clearly shows it to be a calculation. Raw data means the quantities were just as recorded with no calculations, the direct observed physical quantity recorded.

Thank You!! Thank You! By answering the question directly/specifically, I have come to a better understanding of your interpretation of "manipulating". This type of answer is very helpful.

You are absolutely right; we performed a "calculation". I apologize; I had forgotten that we made this calculation. Yes, I consider this is a calculation.

In our calculation, we divided the difference in total media cut by the number of strokes. The total number of strokes is a specific number. We did the same for all blade materials tested. This type of calculation will scale the data. Because the same calculation was done to all materials in the same manner, all materials tested will “scale” the same.
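The calculation described above can be sketched in a few lines. The cut totals and stroke count below are hypothetical, not the FFD2 data:

```python
# Hypothetical totals of media cut (mm) recorded after each block of
# strokes, as in a CATRA-style log; not the actual test numbers.
total_cut = [0.0, 120.0, 210.0, 280.0, 335.0]
strokes_per_block = 20

# Cut per stroke: the difference in total media cut divided by the
# number of strokes in the block. Dividing every material's data by
# the same constant scales them all identically, so relative
# comparisons between materials are unchanged.
cut_per_stroke = [
    (total_cut[i + 1] - total_cut[i]) / strokes_per_block
    for i in range(len(total_cut) - 1)
]
print(cut_per_stroke)  # [6.0, 4.5, 3.5, 2.75]
```

This also makes the terminology dispute concrete: the totals are raw data, while cut per stroke is a (simple, uniform) calculation on them.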

Even after performing your own analysis on OUR data, your analysis shows the same trends and the same order-of-magnitude difference. (Posts #103 and #104 on the previous thread; address listed below)

http://www.bladeforums.com/forums/showthread.php?t=476782&page=6

Given this, I still fail to see:
........... how this is an example of really misleading and biased statistical analysis.

I applaud you for answering my question so directly. If we can continue to answer questions this directly, it will most certainly facilitate our discussions. For the past week we have been trying to understand your approach. I felt that Carl and I have been direct in our questions. If you will answer our questions as directly as you answered my last, I believe we will be able to come to a mutual understanding of each other's positions and analysis approaches. This is called peer review. This is how "science" progresses. Without full mutual disclosure, our knowledge and understanding of edge retention will not progress.

Now, I don't want to be annoying, but I want to restate my previous inquiry:
I do not see any first principles of physics or engineering in this reference or in any of your posts. However, you continue to make reference to "physics" or "mechanics". Would you please show/describe, in physics or engineering principles, the "physics of the mechanics of blunting"?


TN
 
Would you point me to a single reference where somebody who is a recognized authority on Numerical Methods, other than Cliff Stamp, says that the choice of numerical method to apply to a problem is defined by first principles?

I don't believe you can find one. In applying numerical methods, there is almost always more than one way to perform a numerical calculation. Some are more precise than others; some take longer than others. Some work well with one kind of data; others with other kinds of data. As an example of this, you can refer to Numerical Recipes, Section 4.6 on multidimensional integration, where they describe the tradeoffs to be made in determining the integration method.
Carl

Thank you.

It was you who made the assertion that machine testing can't be valid for defining human performance. Since I've not seen any data supporting this claim (just an allusion to a Buck test with different wear media), I repeat that we have a difference of opinion here.
Carl

I have never heard elsewhere that using a machine to test for material properties is inherently flawed. Why would we, and why should we, define edge retention to include factors of human performance? Why wouldn't we define it similarly to any other standard and proven material property, in absolute terms?

How does "human performance" factor into our understanding or tests to determine materials properties? Is hardness defined in terms of human performance? Is compressive strength? No. And these tests are done using a machine.

To my thinking, it increases noise and decreases the accuracy of the test to substitute a human "test machine" for a well-designed machine. It would immediately add multiple variables to the equation; the first to come to mind are force of cut and length of slice, which can be held uniform with a machine.

If I wanted the human factors added into the definition, I would take the 1st and 2nd place knives from Steelhed's 1st invitational cutting contest, and let Goddard and Wilson do follow up testing.
 
Plus, like I said, this has been known and published in German for over 50 years; do you really think I just made up all those references? That is just absurd.

Of course I don't think you made up those references, and I never accused you of doing so. I don't speak German, and I'm not familiar with the 50 years of literature you spoke of. You are. So, as a professional courtesy, I asked you to help me get started in the literature by asking for a couple of good references. You promised to provide them when your review is done. I'm looking forward to seeing that couple of best references in the future.

In the meantime, unless you cite references, the arguments you make based on those references are meaningless to me, because I'm not familiar with them.

Thanks,

Carl
 
Cliff, you can't have it both ways.
Carl

Of course I don't think you made up those references, and I never accused you of doing so.

You must be new here ;)

Carl, I hope you TNelson and Wayne Goddard don't get fed up and leave because of......him.
The 3 of you have been forthcoming, polite and well mannered.

I thank you and welcome your continued participation at the forums.
 
cds4byu - Sure, I would love to see one and use one if possible. I will be gone parts of August though so hopefully our paths will cross. Let me know when you are in Anchorage and I will come out to the airport - before you go through security of course. ;)

Don't get too frustrated with Cliff. He rarely says anything positive, and usually takes every statement to task. Like my post about competitions. He blasted me for saying you could create a competition limiting human involvement even though I said that I would prefer human involvement right after that. He doesn't bother me anymore. I cannot understand his test methodology at all, his writing is often obtuse, and his graphs are like Dylan lyrics, impenetrable. Of course there is the entertainment value I suppose. :)
 
Push cutting sharpness is different from slicing sharpness. You can be very high in one and low in another, thus if you are testing slicing ability you would look at slicing sharpness. -Cliff

We accept the fact that push and slice sharpness may be different! We are not using the CATRA to measure sharpness. I'll state it again for you: we used the CATRA as a means of producing physical wear on the blades.
TN

I understand, and I have clearly differentiated between the two different sharpness measures. The assertion that I am challenging is when you assert that slicing wear can't be used with pushing sharpness.

Actually I think that the method you are using for testing the FFD2 blades correlates quite well to the way an average person would typically use a knife and check to see if it is still sharp. In my experience (and no it isn't all encompassing and I have no statistical data to back it up), the average person (not someone performing a specialized task) tends to do more slicing than push cutting. When they check to see if the blade is still sharp, they don't generally check to see if it is sharp by slicing something with it (that's what they were doing when they started to wonder if the knife was sharp), and they damn sure don't whip out a scanning electron microscope (or even a 20X loupe, although I guess I do know a few people that might run to their SEM). They will frequently check to see if the blade will still shave, which as far as I know is push cutting. The other method I'll see often is pressing lightly on the edge with a fingertip, dumb, but still push cutting.

In any case the test seems like it is a machine test that a human can easily relate to, blunting by slicing, testing sharpness by push cutting.
 
Actually I think that the method you are using for testing the FFD2 blades correlates quite well to the way an average person would typically use a knife and check to see if it is still sharp.

In any case the test seems like it is a machine test that a human can easily relate to, blunting by slicing, testing sharpness by push cutting.

That's exactly why we came up with the test.

Carl
 
Cliff, I am having trouble understanding your model and fits.

If you look at the graph and fits you show in the other thread, the curves seem well separated. The points seem to fall nicely on the curves. Yet you claim these curves are statistically indistinguishable.

The curve fit for the FFD2 shows the parameters:
a = 0.34(24) b = 0.36(7) c = 0.79(2)

If I assume these match your model with c=Ci, a=a, b=b, these parameters reproduce the curve you show. However, if I assume the numbers in () are errors on the parameters (?), I get a family of curves that is clearly not supported by the data. For example, simply changing the parameter c from 0.79 to 2.79 raises the graph well above the curve shown. Changing b from 0.36 to 7.36 takes the output to essentially zero after 7 cuts.

The fits look good, but your estimate of the uncertainty in the fit looks bad. What do you assume is the error in each data point when you do the fit?
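The family-of-curves check described in this post can be sketched numerically. The model form below is purely an assumption for illustration (the thread never states the actual fitted equation); only the central values and uncertainties come from the quoted fit.

```python
import numpy as np

# ASSUMED blunting-model form, a guess for illustration only, not
# necessarily the model actually fit in the thread:
#   f(x) = c / (1 + a*x)**b
def model(x, a, b, c):
    return c / (1.0 + a * x) ** b

x = np.linspace(0.0, 20.0, 201)
a, b, c = 0.34, 0.36, 0.79        # central fit values from the post
da, db, dc = 0.24, 0.07, 0.02     # quoted 1-sigma uncertainties

center = model(x, a, b, c)
# A crude envelope from the parameter extremes. A proper error band
# would also need the parameter covariances from the fit.
lo = model(x, a + da, b + db, c - dc)
hi = model(x, a - da, b - db, c + dc)
print(center[0], lo[0], hi[0])    # at x = 0 the band is just c +/- dc
```

This makes the poster's observation quantitative: shifts like c going from 0.79 to 2.79 are a hundred times the quoted uncertainty of 0.02, so such curves lie far outside any plausible band.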
 
So why won't Cliff just open his stuff to analysis?

Lie: my results have always been public, even the data behind the results.

You have asserted that Buck's experience with Ionfusion blades is mathematical evidence.

No, Chuck Buck did himself directly (from memory): "We grew bored waiting for the knives to blunt on the CATRA machine, but in REAL LIFE it is otherwise." He specifically mentions the effect of forces on the edge which blunt by a mechanism other than wear. He even notes the exact cause (correctly) and makes it public. This was years ago on the forums.

Exactly what principle are you referring to here?

A human hand will have a much higher variance in the stroke vector (magnitude and direction both) for the cut path. This induces significant lateral forces on the edge. This was why such machine testing was quickly noted to be problematic in Germany, and other methods were used to predict the performance of blades when used by people. Ideally, and this is the obvious part, you just use people (with, as noted, the correct methods to prevent systematic bias).

...there is almost always more than one way to perform a numerical calculation.

This is just math: you state the judgement criteria, and it isn't opinion. No more than stating that if you are looking for a steel to take shock, then S7 is better than T15.

...you're admitting that there is a subjective component when deciding how to transform the data to derive meaning from it.

This again would be math. For example, you can transform a function to make it faster to calculate (fewer numerical operations) or more robust (less parameter correlation). These again are not opinions.
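One textbook instance of the kind of transformation described here (chosen as an illustration, not taken from the thread) is Horner's rule, which rewrites a polynomial so it needs fewer multiplications without changing its value:

```python
def poly_naive(coeffs, x):
    """c0 + c1*x + c2*x^2 + ... evaluated term by term,
    costing roughly n^2/2 multiplications."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def poly_horner(coeffs, x):
    """The same polynomial rewritten as
    ((cn*x + c(n-1))*x + ...)*x + c0, costing n multiplications."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

coeffs = [2.0, -3.0, 0.5, 1.0]   # 2 - 3x + 0.5x^2 + x^3
print(poly_naive(coeffs, 2.0), poly_horner(coeffs, 2.0))  # both 6.0
```

Both forms are the same function; the transformation only changes the cost and numerical behavior of evaluating it.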

And we don't have a good answer for why that is.

I showed years ago how you can make a blade progressively lose slicing aggression while maintaining a high standard of push cutting sharpness. It is strongly dependent on the carbide nature vs the abrasive.

As an aside, if someone were to perform an evaluation of your product, would you want them to go forward publicly with the results if they were found to be not in your favor, and yet there were a number of concerns, such as the above, which could in fact make for a misleading conclusion based on less-than-optimal methods?

...we performed a "calculation".

Unless you are performing analysis by magic, everything done to data (filtering, smoothing, modeling, etc.) is a calculation. You however don't refer to modeled data as "raw".

As noted, I described the physics underlying the model in the same page as the equation is first described.

I'm not familiar with the 50 years of literature you spoke of.

I don't speak German either and have only a brief working outline; as noted, I am doing the survey now. I gave you a couple of key authors; did you even do a search on them and read their relevant papers?

a = 0.34(24) b = 0.36(7) c = 0.79(2)

This means 0.34 +/- 0.24, 0.36 +/- 0.07, and 0.79 +/- 0.02.

What do you assume is the error in each data point when you do the fit?

They are set to the standard error of the mean of each data point. As noted, all data sets are collected several times.

-Cliff
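The standard error of the mean Cliff refers to is simple to compute; a minimal sketch with hypothetical repeat measurements:

```python
import math

def sem(values):
    """Standard error of the mean: sample standard deviation
    (n-1 denominator) divided by sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(var / n)

# e.g. three repeat runs of one data point (hypothetical numbers)
runs = [101.0, 98.0, 104.0]
print(sem(runs))  # ~1.73
```

Note that with only two repeats, as mentioned later in the thread, the sample standard deviation itself is very poorly determined, so the resulting error bars should be treated cautiously.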
 
This means 0.34 +/- 0.24, 0.36 +/- 0.07, and 0.79 +/- 0.02.

Looking again at the fits, b and c are well separated; a has some overlap. What kind of difference would you hope to see in your model to claim a difference in performance?

Based on this model though, some of the claims in the paper would be questionable. Unless the knives start with the same sharpness (Ci), it doesn't make much sense to look at sharpness values later. This is just translating the curve up or down.

They are set to the standard error of the mean of each data point. As noted, all data sets are collected several times.

OK - I looked at the FFD2 test data. Most of the testing was done only twice. The average error for the data points (average of standard deviation divided by the mean) is less than 4%. If this is noisy data, what level of noise do you find acceptable to support the modeling you suggest?
 