You would do that in the initial stages to verify the robustness of the fitting; I do that as well, but I was speaking of using Monte Carlo methods to generate pseudo data sets to show the effect the noise has on the inferences. You generally only do this when it is difficult to calculate the uncertainties directly from the correlation matrix. But now you can generate 100 data sets and look at the model spread graphically, instantly, on even a regular home system. It used to take me longer just to type the command to start the program than it took the program to update the graph.
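To give a feel for it (the blunting curve, noise level and names here are just placeholders, not my actual model or data), a sketch along these lines is all it takes to overlay a hundred fits:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Placeholder blunting model and data -- stand-ins, not the real model or measurements.
def model(x, a, b):
    return a / (1.0 + b * np.sqrt(x))

rng = np.random.default_rng(0)
x = np.linspace(1, 100, 20)                                  # amount of material cut
y_obs = model(x, 1.0, 0.15) + rng.normal(0, 0.05, x.size)    # one noisy data set

plt.plot(x, y_obs, 'ko', label='data')
for _ in range(100):
    # pseudo data set: resample the assumed noise around the observed values
    y_sim = y_obs + rng.normal(0, 0.05, x.size)
    p, _ = curve_fit(model, x, y_sim, p0=[1.0, 0.1])
    plt.plot(x, model(x, *p), color='b', alpha=0.05)         # one fit out of the spread
plt.xlabel('material cut')
plt.ylabel('sharpness')
plt.legend()
plt.show()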
You clearly don't understand my style of coding; that would be a cruel thing to force on someone. The algorithm is actually quite simple, just brute force curve intersection. For each point on one curve you find the points on the other curve which bound it, i.e., y1(x1) < y2(x) < y1(x2). Start off assuming the same point and then just move up or down as required to find the bounds. Then do a linear/spline approximation to get the approximate intersect value, i.e., the x' for which y1(x') = y2(x).
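A rough sketch of that bracketing step (made-up names, not my actual code, and assuming the curve is sampled on increasing x with sharpness decreasing):

def intersect(y_target, xs, ys, start=0):
    # Find x' such that the sampled curve (xs, ys) passes through y_target.
    # Assumes ys decreases as xs increases and that y_target lies inside the range of ys.
    i = min(max(start, 0), len(xs) - 2)
    while i > 0 and ys[i] < y_target:                  # walk down in x
        i -= 1
    while i < len(xs) - 2 and ys[i + 1] > y_target:    # walk up in x
        i += 1
    # now ys[i] >= y_target >= ys[i + 1]; linearly interpolate between the bounds
    frac = (y_target - ys[i]) / (ys[i + 1] - ys[i])
    return xs[i] + frac * (xs[i + 1] - xs[i]), i       # return i to seed the next call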
The ratio x/x' is then the cut ratio, which means quite simply how much more material one blade can cut than another to reach a given sharpness. That is what most people would be interested in (the sharpness ratio at a given cut length is of course trivial to calculate: y2(x)/y1(x)). This procedure is repeated over the domain to generate the cut ratio data set. To take the noise in the data into account, you run a Monte Carlo simulation on the raw data to produce a set of cut ratio values, and you can present the mean values of these.
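Putting the pieces together, roughly (the two blunting curves here are stand-ins, not measured data; blade 1 is sampled over a wider domain so it spans blade 2's sharpness range, and the noisy lookup curve is crudely forced to stay monotone before inverting it):

import numpy as np

rng = np.random.default_rng(1)

# Stand-in blunting curves for two blades: sharpness vs. amount of material cut.
x1 = np.linspace(1, 400, 80)
y1 = 1.0 / (1.0 + 0.20 * np.sqrt(x1))
x2 = np.linspace(10, 100, 40)
y2 = 1.0 / (1.0 + 0.12 * np.sqrt(x2))
sigma = 0.01                                    # assumed measurement noise

def cut_ratio(x2, y2, x1, y1):
    # For each point (x, y2(x)) find x' with y1(x') = y2(x) and return x/x'.
    y1 = np.minimum.accumulate(y1)              # keep the noisy lookup curve monotone
    x_prime = np.interp(y2, y1[::-1], x1[::-1])
    return x2 / x_prime

# Monte Carlo on the raw data: perturb both curves, recompute the cut ratio,
# then present the mean ratio at each cut length.
ratios = np.array([cut_ratio(x2, y2 + rng.normal(0, sigma, x2.size),
                             x1, y1 + rng.normal(0, sigma, x1.size))
                   for _ in range(1000)])
print(ratios.mean(axis=0))                      # mean cut ratio over the domain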
I started doing this because I didn't like the fact that the cut ratio was dependent on the model fitted to the data, regardless of how well it fit, so I made its calculation model independent. I still use the model because I want to determine how to actually calculate the parameters from the physical properties of the steel, wear resistance and so on. Plus I wanted to make the point that you will see the same general trend in blunting no matter what media, method of cutting, grit, angle, etc.
I also want to make it clear that the process is nonlinear, because, as is very clear in the above, almost no one understands what this implies, hence the absurd statements that get made which are unrealistic and even insensible. There needs to be an understanding that saying steel A has 10% better edge retention than steel B is just nonsense given the nonlinearity of the problem. You can't average either; that is just as meaningless. You can of course give a range, but you always have to be clear about what you are saying.
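To make that concrete with a pair of made-up blunting curves (purely illustrative, not measured data):

import numpy as np

# Made-up blunting curves with different shapes:
# sharpness as a fraction of the initial edge vs. amount of material cut.
def steel_A(x):
    return np.exp(-x / 40.0)                # holds a high polish longer

def steel_B(x):
    return 1.0 / (1.0 + 0.09 * x)           # loses the fine edge fast, then fades slowly

def material_to_reach(sharpness, blade):
    # Amount cut by the time the blade has blunted down to the given sharpness.
    x = np.linspace(0.01, 500, 100000)
    return x[np.searchsorted(-blade(x), -sharpness)]

for s in (0.8, 0.5, 0.2):
    a = material_to_reach(s, steel_A)
    b = material_to_reach(s, steel_B)
    print(f"down to {s:.0%} sharpness, A cuts {a / b:.1f} times as much as B")

With these curves the "advantage" of A is more than three to one down to a high sharpness but less than one and a half to one down to a low one, so no single percentage, and no average, describes the comparison.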
Unless the curves are the exact form I noted, y(x) = c, you cannot use an average to compare. The reason I use the model I do to fit sharpness/cutting ability is that it is based on the physics of blunting; it isn't an empirical model. This is a very critical point. Of course there are many empirical models, but these are very different; a simple infinite power series will fit any function, obviously, this is basic calculus. But that has no basis in physical modeling, which is what I proposed.
-Cliff