Maybe you can maintain 50% consistency, but the scatter will not be as low as you claim, because the error is not random. I know all about judging distances, and have worked enough with small machine parts to be able to judge distances down in the deca-mil range. Does that mean I can maintain a force that consistently? Of course not. You are comparing a purely sensory phenomenon to a sensory-motor one.
bald1, is this maybe the kind of thing you are talking about? A little bit of spin to make himself sound like he knows what he's talking about, when he really doesn't? Not quite the same as spinning other people's words, but still a type of spin (using real terms, examples, etc., but applying them where they don't actually apply).
Want to do an experiment? Take a digital gram scale. Watching the readout, apply a force of 1 kilogram. Release and repeat several times. Now, turn the readout away from you, and have someone else record the results (without letting you see). Try to apply that same 1 kg force 1000 times, and see what kind of scatter you get, and whether a pattern emerges.
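If you want to check whether that scatter is actually random, here is a rough Python sketch (the readings listed are made-up placeholders; substitute the 1000 values your helper recorded):

import statistics

# hypothetical example data; replace with the 1000 recorded readings, in grams
readings = [1003.2, 998.7, 1001.5, 995.0, 1006.8, 999.4]

mean = statistics.mean(readings)
scatter = statistics.stdev(readings)  # sample standard deviation = the scatter
print(f"mean = {mean:.1f} g, scatter (1 sigma) = {scatter:.1f} g")

# crude lag-1 autocorrelation: if successive attempts are correlated,
# the error is not random (you are drifting or over-correcting)
devs = [x - mean for x in readings]
num = sum(a * b for a, b in zip(devs, devs[1:]))
den = sum(d * d for d in devs)
print(f"lag-1 autocorrelation = {num / den:.2f}")

An autocorrelation well away from zero means the attempts are not independent, which is exactly the non-random behavior I'm describing.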
re: expectation
They would seem to be revelations to you. You mention nothing of the sort in your reviews, and claim to have a reasonable amount of scientific objectivity. Again, unless you can arrange a double-blind trial (don't see how, since your method still allows you to know which knife you are using, and there really isn't any reasonable way to prevent you from knowing that without otherwise compromising the results), you can't claim that this is a scientific procedure.
Using time as a pseudo-random number might work if you used something subtle, like milliseconds, but a gross measurement like a fraction of a minute is well within the range you can influence subconsciously. Anyone who can say 'one-Mississippi, two-Mississippi,' etc. has experienced this natural ability to judge time intervals. Waiting to calculate averages falls into the same pitfall: your brain performs subconscious calculus in order to catch a thrown object; do you really think it has any trouble taking an average on the fly?
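For what it's worth, here is a sketch of what I mean by 'subtle' (Python; the knife labels are hypothetical). It only randomizes the order of testing; it does nothing to keep you from knowing which knife you end up holding:

import time

KNIVES = ["knife A", "knife B"]  # hypothetical labels

def pick_knife():
    # the sub-second digits of the clock are effectively invisible to your
    # internal 'one-Mississippi' timer; minutes and seconds are not
    microseconds = time.time_ns() // 1_000 % 1_000_000
    return KNIVES[microseconds % len(KNIVES)]

print(pick_knife())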
If you want to claim scientific objectivity and still use single-blind (or weaker) methods (and those really are the only type available for most knife tests, short of comparing blades made from different steels but otherwise completely indistinguishable), you must use a large number of testers, not just a few friends.
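How large is 'large'? A standard back-of-the-envelope sample-size estimate gives a feel for it (Python sketch; the delta and sigma are made-up numbers, not anything taken from your tests):

from statistics import NormalDist

def testers_needed(delta, sigma, alpha=0.05, power=0.80):
    # standard two-sample estimate: testers per knife needed to detect a true
    # difference of 'delta' in some score, given tester-to-tester spread 'sigma'
    z = NormalDist()
    return 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sigma / delta) ** 2

# hypothetical numbers: the spread is twice the effect you hope to detect
print(round(testers_needed(delta=1.0, sigma=2.0)))  # roughly 63 testers per knife

When the tester-to-tester spread is bigger than the difference you're trying to show, a few friends simply don't cut it.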
re: training:
Reading a few mat. sci. books, and having a few discussions with ME's, does not qualify you for failure analysis. It is one of the most intensive and esoteric fields in the whole of engineering. I wouldn't hire an engineer to perform failure analysis without at least a Master's degree or two, and I would probably require that this person had worked for several years in various related fields, unless I was truly impressed with his/her references.
re: complexity:
You did not say that fatigue is a complex issue. You specifically said that it comes down to only two simple variables.
re: extreme tests:
You have also said that extreme tests can replace low-stress testing. Which one is it? And using extreme tests at all requires careful control, as even slight variances can produce wildly differing results. For example, a friend was recently working on an RCS thruster for a missile. This one has to be particularly compact, producing upwards of a thousand pounds of thrust from a unit approximately the size of a 'D-cell' battery. They predicted a 25% variance between their model of how the thruster would perform and the actual data from the first test-firing. They achieved a 10% difference, and were nearly in tears. The lesson? Extreme tests are very hard to predict, implement, and draw conclusions from.
bald1, I think we're pretty much on the same page here. While Cliff's tests might be useful 'real-world' tests, he claims that his tests are scientific, and therefore better than the real-world tests. If he presented them as real-world tests, I would have no problem, but he wants to supplant others' real-world testing with his own, carefully disguised as science. Both as a knife user who values having a wide variety of real-world anecdotes to draw upon, and as a scientist who expects data presented as scientifically accurate to actually be so, I find this quite annoying. On the one hand, it seeks to drive out other real-world testers, and on the other, it undermines the validity of true scientific data.
--JB
------------------
e_utopia@hotmail.com