Talonite: Good, Bad or Ugly?

Cliff,

I like to read your posts/reviews and find them interesting. But I want to ask a few questions.

The point of inferential statistics is to make an inference about a population using a sample. What concerns me with your reviews is that the sample size you use is 1... if I am reading your reviews correctly. Inferential statistics don't apply in this case, only descriptive statistics... or if you do apply them, the reliability of the inference is drastically reduced. The margin of error is so large that you cannot say anything significant... unless of course you randomly chose a perfectly average singleton sample, but the probability of that is very small (since no singleton sample has to be average!). How do you overcome this... how are you establishing your tests? Another question I have is: how are you measuring the force of application so accurately? What is your margin of error (variation) in the application? Curious.

Jeff Jenness
 
Jeff :


The point of inferential statistics is to make an inference about a population using a sample.

In general, yes of course.

[sample size of one]

The margin of error is so large that you cannot say anything significant

It is very true that if you choose a sample of size one from a population of unknown variance, then you cannot draw a meaningful conclusion about how samples in general will behave: while you can readily determine your sample's parameters, you have no way of knowing how much inherent variation there is in the population.
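This point can be illustrated with a small sketch (all numbers hypothetical): with a sample of one, the sample standard deviation is undefined, so no confidence interval can be computed from the data alone; it only becomes possible if the population variance is known from elsewhere, such as the maker's QC records.

```python
import math
import statistics

cuts = [112.0]  # one blade's cutting score (hypothetical units)

# With n = 1 the sample standard deviation is undefined, so no
# confidence interval can be computed from the data alone.
try:
    statistics.stdev(cuts)
except statistics.StatisticsError:
    print("n = 1: sample variance undefined, no CI possible")

# If the population variance is known from elsewhere (e.g. the maker's
# QC data), a CI follows from the normal approximation even for n = 1.
known_sigma = 5.0                                 # assumed population std. dev.
half_width = 1.96 * known_sigma / math.sqrt(len(cuts))   # 95% CI half-width
print(f"95% CI: {cuts[0] - half_width:.1f} .. {cuts[0] + half_width:.1f}")
```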

However, the variance in the population of a blade's performance is not unknown. Common sense tells you that if it were very large, say 50%, there would be a lot of complaints about the maker's QC. One person would handle a blade, buy one based on that, and then find it wildly different from the one he handled. I brought this up a while ago, and most makers will tell you that from the QC testing they have done, the variance in blade behavior is very small.

Of course the simplest solution is to ask the maker about the review. As a customer, just say "does that review indicate the performance I should expect?" If the maker guarantees it does, then the confidence level you have to work with is 100%. If they don't, let me know, and I will attach a disclaimer to the review to that effect: basically buyer beware, as the maker does not support my description.

How do you overcome this

The ideal solution, so the reviews could determine the population variance (and thus the maker's QC) on an independent basis, would be for me to say, the next time someone asks me to do a review, "sure, please send me ten random blades". I would really like this, as besides the QC question it would make a lot of work much easier if I had blades that could be left sharp and unused as controls, could examine different finishes at the same time, etc. However, realistically that is obviously not going to happen, and it is rare that I have multiple blades from the same maker to enable conclusions to be drawn about the population.

There have been cases where I have looked at it, though. For example, I have two Stellite 6K knives from Gerber, same model; they are very close in cutting performance, well below the ideal significance level I could hope to achieve with hand testing (5%). However, one has a warp in the blade and one does not. So you can draw conclusions about the QC, finish-wise and cutting-wise; not overly precise ones from a sample size of two, but still better than nothing.

I have also used 7 1095 blades from Ontario; the variation in that sample was immense. Some were so brittle they snapped and chipped, some so weak they easily bent, and some were perfect. I also have on loan three blades from David Boye; all show very precise fit and finish: clean grind lines, even work, smooth transitions, etc.

Another question I have is how are you measuring the force of application so accurately? What is your margin of error (variation) of the application?

This is the biggest problem I have right now. I have tried using just the force of the blade's weight when doing the fabric slicing to gauge the cutting performance. The problem with this, of course, is that the heavier blades will have an advantage over the lighter ones. After thinking about this, I figured that if I compared percentage edge loss, the size of the bias would be greatly reduced. However, I am not completely happy with that.
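A minimal sketch of the percentage comparison described above, with hypothetical sharpness scores: comparing relative rather than absolute edge loss puts a heavy and a light blade on more equal footing.

```python
def percent_edge_loss(initial, final):
    """Percentage drop in cutting ability over a test run."""
    return 100.0 * (initial - final) / initial

# Hypothetical scores: the heavy blade loses more cutting ability in
# absolute terms, but the relative loss is identical.
heavy = percent_edge_loss(initial=140.0, final=105.0)   # 25.0
light = percent_edge_loss(initial=90.0, final=67.5)     # 25.0
print(heavy, light)
```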

I have also tried using a set amount of force and have found that I can very consistently produce the same result on a given run. That is to say, if I take two identical blades (or one blade twice) and do about 10 cuts with each, the means will differ by less than 10%, and generally I want a much bigger difference before describing a functional advantage. The big problem with this method is that there is no way for me, a month from now, to remember how much force I used back then. However, I can get around this by using one of the previous blades and redoing the work with it alongside the new ones.
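The consistency check described here (two runs of about 10 cuts each, means within 10%) can be sketched as follows; the cut counts are made up for illustration.

```python
import statistics

def relative_mean_difference(run_a, run_b):
    """Percent difference between two runs' means, relative to their average."""
    mean_a, mean_b = statistics.mean(run_a), statistics.mean(run_b)
    return 100.0 * abs(mean_a - mean_b) / ((mean_a + mean_b) / 2.0)

# Hypothetical cut counts: the same blade tested in two separate runs.
run_1 = [18, 20, 19, 21, 20, 19, 18, 20, 21, 19]
run_2 = [19, 20, 20, 18, 21, 20, 19, 19, 20, 21]

diff = relative_mean_difference(run_1, run_2)
print(f"means differ by {diff:.1f}%")   # well under the 10% scatter threshold
```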

There is still some variation, as the index might vary, even the percentage index, because of the force change. How to get around this was pointed out to me by a friend, who stated the obvious: all I need to do is determine how the force affects the cutting ability at various stages of blunting. Once this is done, the index can be far more global than it is right now, as it can be scaled.
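That suggestion could look something like this in practice: calibrate once how applied force maps to cutting ability, then rescale any raw result to a common reference force. The calibration table and all numbers are invented for the sketch.

```python
def interpolate(x, table):
    """Piecewise-linear lookup in a sorted (x, y) calibration table."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("force outside calibrated range")

# Hypothetical calibration: applied force (kg) -> cuts achieved with a
# fresh reference blade, measured once and reused across sessions.
calibration = [(0.5, 10.0), (1.0, 22.0), (1.5, 30.0), (2.0, 36.0)]

# Rescale a result taken at 1.5 kg to the 1.0 kg reference force.
raw_cuts = 27.0
scaled = raw_cuts * interpolate(1.0, calibration) / interpolate(1.5, calibration)
print(f"{scaled:.1f} cuts at reference force")
```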

This is something I have planned to do, and I am working on some ways to determine the force I am using, like measuring the deflection of an elastic object, or the tension in a rope, etc. I could easily do it using some sensors in the lab, but ideally I want people to be able to duplicate what I do without the need for special equipment.

-Cliff

[This message has been edited by Cliff Stamp (edited 08-05-2000).]
 
Cliff, sorry, but unless you're RoboCop or something, you cannot maintain that level of consistency by hand. Determining whether a given variable stays within your study tolerance (significance level, in your way of speaking) is the exact point I was making. You cannot properly determine that for some variables, and for others the variation is quite clearly well outside the tolerance.

If humans were random creatures, being able to duplicate results would be an indication that the % error was very low, but we are not. I can easily duplicate results for any number of tasks, but that doesn't mean anything, other than that I can duplicate a pattern (i.e., not random) repeatedly. Remember, you have certain expectations, and your subconscious (and even some lower-level systems) is going to do its best to make sure those expectations are met.

==

Cliff, what training do you have in materials science, stress analysis, failure analysis, et cetera? Fatigue is a very complex thing, not just two simple variables. If fatigue were that simple, the aerospace industry wouldn't spend so much to test parts/materials to failure in realistic ways; we would just use extreme tests. The fact is, extreme tests are far from accurate.

jeffj, you're right to be 'curious.' Cliff's methods are fundamentally flawed. The points you noticed are but a few. If you get a chance to take some engineering courses (I glanced at your homepage), you will have many more 'questions' for Cliff.

Sorry to everyone else for the long replies, but it's about time someone burst Cliff's bubble, as he is getting more-and-more cocky as time goes on.

--JB

------------------
e_utopia@hotmail.com
 
Am I dazzled by BS? I think I might be. Glad I have the chance to try a Talonite blade soon. One thing: even if all sides are right on this metal, it seems like Talonite would make a great neck knife?
 
I am curious too... just exactly what happens when this bubble bursts?? :)
 
E_utopia :

you cannot maintain that level of consistency by hand.

Fifty percent? With repeated practice you will find that people can go far, far below this by judgment alone. Ask a very experienced machinist to judge the thickness of a piece of metal, or a carpenter to judge the length of a piece of 2x4. The argument is moot anyway, as I have done verification trials to determine the mean scatter; it is not something I take for granted is below the significance level. I have done it, so obviously I can.

you have certain expectations, and your subconscious (and even some lower-level systems) is going to do its best to make sure those expectations are met.

Of course; you state these things like they are revelations, but they are very basic problems. If I am comparing three knives and how they chop on wood (or whatever), I will at the same time do something else, like exercise my dog (watch a TV show, sharpen a knife, etc.), and I record the trials in a random fashion blade by blade (using the position of the second hand on my watch to select the blade), and only calculate the averages when a large volume of work has been done. If I were to work one blade at a time, I would fall into a pattern of chops for that blade and possibly introduce a bias into the result, skewing the mean towards the first run. At times I will also invite friends over and add their results into the mix; we record our performance separately and don't discuss the results until we are finished.
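The scheme described here (shuffled trial order, averages computed only at the end) can be sketched like this; `run_one_chop` is a hypothetical stand-in for the actual measurement.

```python
import random
import statistics

def run_one_chop(blade):
    """Hypothetical stand-in for one recorded chopping trial."""
    base = {"A": 20.0, "B": 18.0, "C": 21.0}[blade]
    return base + random.uniform(-1.0, 1.0)

blades = ["A", "B", "C"]
trials_per_blade = 10

# Every blade appears equally often, but the shuffled order prevents
# falling into a per-blade rhythm that could bias the results.
schedule = blades * trials_per_blade
random.shuffle(schedule)

results = {b: [] for b in blades}
for blade in schedule:
    results[blade].append(run_one_chop(blade))

# Averages are calculated only after all the work is done.
for blade in blades:
    print(blade, round(statistics.mean(results[blade]), 1))
```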

No, I don't describe this in detail in the write-ups, as for the most part people are simply not interested in it. I have commented many times, however, that I can be far more detailed if the desire for information is there.


Cliff, what training do you have in materials science, stress analysis, failure analysis, et cetera?

I have read a few materials books, but most of what I know comes from discussing the subject with friends who are working engineers; one of them posts here as Muni. He is a mechanical engineer with 7 years of experience in the automotive industry and is a knifemaker as well. He has studied under traditional makers in Malaysia and works mainly in large blades, parangs and such. Usually when a topic gets beyond the elementary I drop him an email and we discuss it; he is an excellent reference, as he can cover both the theory and the practical sides and can directly relate the principles to blades. A large part of my time is also spent discussing similar things with makers like Phil Wilson, who has an engineering background.


Fatigue is a very complex thing, not just two simple variables.

Of course it is; even an elementary materials book will have at least a chapter on fatigue. What did you expect, an exhaustive commentary on the whole existing body of work? You could repeat that statement for every single comment made on Bladeforums every time someone asks a question and gets an answer.

The fact is, extreme tests are far from accurate.

Of course they are; that is why I stated that they are only to be used as an estimate and are not a replacement for high-cycle, low-stress work. However, they also have a use in determining the expected damage from accidental high-stress work.

[referring to Jeff]

you will have many more 'questions' for Cliff.

I hope he asks them, the ones he raised in the above are very critical points.

-Cliff

[This message has been edited by Cliff Stamp (edited 08-05-2000).]
 
e-utopia--

Thank you for putting Cliff through his paces. It's always invigorating to see someone with a technical background and an agenda launch into an attack on Cliff. That's the only way we get an in-depth glimpse of just how thoughtful, creative, and committed to accuracy and fairness Cliff really is, as well as the depth and scope of his technical knowledge and considerable resources.

For me, Cliff's testing is very straightforward and speaks for itself. Every test I've seen him perform and report on has contained information that has added to my understanding of knives, and I've been studying and using blades for over 40 years. What he does for me, every time, is ask and answer the basic question I have about any knife:

"If I buy this knife to do a given job, based upon its design and intended application, what kind of performance can I expect and how does it compare with other knives designed for that purpose?"

His methods, true, are often outside of the conventional mainstream, and not always fully understood or appreciated. But I've never seen him apply any method for any reason other than to answer that question as fairly as he could. The man has proven tireless in that regard.

If you were to take any individual test and report from Cliff's reviews, and critique his method for what comparative data it revealed about the knives tested, I think you'd quickly find your arguments to be transparent.

But what FUN!!!

--Will
 
e-utopia aka JB

I too appreciate your insightfulness in probing Cliff's methods. But unlike Will, I draw a different conclusion: that Cliff, as with most of us, is simply posting impressions born out of our approaches to evaluating the utility of a particular knife, steel, or alloy. What sets Cliff apart is his use of technical jargon. I find the reviews and impressions posted by members all contain valuable information. The use of technical references may or may not add to that information, after gleaning the manner in which it was produced.

Most of the rest of us cite real world field experience and employ more common terms although their definition (e.g. wire edge) sometimes gives rise to additional dialogue which attempts to further clarify. This isn't really a problem as it facilitates better understanding of another's impressions.

What I've objected to, and what has been so eloquently nailed down by Ron Hood, is Cliff's propensity to take other folks' terms out of context, parse them, and spin-doctor them in a seeming attempt to discredit, especially if the impressions don't jibe with the impressions he has obtained. To agree to disagree is honorable; to rail and impugn calls motive into question.

Nothing hard to understand here.

And since motive and/or agenda keeps being raised, I remain curious why Cliff has continued to avoid addressing a rather disturbing charge Rob Simonich made much earlier in this thread. Interesting.

So, on both counts... the unanswered charge and the apparent need to spin others' views elicits from me "What's wrong with this picture?" A sanity check would be most appropriate.

Bob


[This message has been edited by bald1 (edited 08-05-2000).]
 
Originally posted by bald1:
What sets Cliff apart is his use of technical jargon.


Actually, to me what sets Cliff apart is his thoughtful use of testing methods which are designed to yield accurately measurable and replicable results. His tests can be replicated by anyone who cares to critique his method, because he reports on exactly how the tests were done, the finish and geometry of the blades, and many other critical variables. For example, cutting so many linear centimeters of a common material, measuring the dulling effect at stated intervals, and using methods which are simple and easy to duplicate, so as to arrive at a reliable comparison against a "control" blade. The simple fact is that none of the "experiential field use results" I've seen used to dispute his findings can be replicated in this way.

I see a lot of criticism of Cliff's methods and his conclusions, but one thing I don't see is any of his critics replicating his tests and coming up with significantly different results. To me, this leaves Cliff's conclusions intact. In my view, the rest is just semantics and conversation, heated and entertaining as it may be.

On further thought, there is one other thing that I believe sets Cliff apart from these critics: If anyone WERE to repeat his tests and demonstrate a flaw in his methods or conclusions, he would be the very first to congratulate and thank that person for contributing to his knowledge base.

--Will
 
Cliff, as I feel your tests are extreme and over the top, I would be glad to receive any knife you wish to send me. I am fair and honest, and would be happy to verify any results of yours for you.
 
Will,

Re-read my post and look at the context rather than emulate Cliff by extracting a single sentence. Yup, Cliff is set apart by his use of the technical. Do I adjudicate that as a negative? Nope. I do suggest that such added review info by anyone may in fact add measurably once the methods or sources are understood.

My post took no issue with his tests per se. Look again as I am concerned simply with two issues. The Simonich charge which remains unanswered, and the propensity to spin others' words. No more, no less.

You speak of repeatability as being key to your acceptance of reviews, especially the data Cliff presents. It sure strikes me as funny that so many, including many with enviable industry reputations, have had similar field or real world experiences with like material or knives. Are these experiences invalid? Or more to the point, invalid because Cliff's testing conclusions are inconsistent with those experiences? From my knothole the prudent man will consider all the reports and not just that from one source.

Anyhow, my initial point still holds. I don't offer any negativism towards Cliff's reports in toto. Frankly, I found his chopper stuff quite laudable. I do not think, and he has admitted as much, that his tests are strictly scientific; rather, they are designed to separate wheat from chaff expeditiously, with what is for him an acceptable degree of error. But, as I've stated before, citing my naval nuclear materiel experience: what the textbooks project, what the real scientific lab NDT/MTBF analyses and QA testing reports show, and what actual-use experience provides can be quite illuminating, for they can be at wide variance. It simply doesn't pay to hang one's hat on any one of these alone.

Bob
 
Bob--

I'm not taking issue with anyone's results. I am just trying to learn what I can from the information presented.

My point is that Cliff's results are the only ones that are produced by methods which I can readily evaluate for myself, because he is the only one who has taken the time and expended the effort to accurately define his variables. On the other side of the debate, we have the argument that there is just not enough meat available to do a controlled test of talonite. What kind of data is that? It's just an open-ended opinion. Sure it may be based on experience, but how can we evaluate that?

Now, if Cliff's results weren't available, I'd agree with you. We wouldn't have anything more to go on than the impressions of people who have used the stuff. But, thanks to Cliff, that's just no longer the case.

I don't know that Cliff is right. I just have no way of evaluating the weight to give other arguments, when so many variables are left undescribed. I think one of the chief differences between Cliff's results, and the experiences of others, is the standard which Cliff compares against. I don't know anyone else who is comparing talonite, for example, to a CPM 10V blade which is only .010" thick behind the edge bevel, with an included edge bevel angle of only 30 degrees--or a D2 edge that's been taken down to 20 degrees included angle.

One might say it's unfair to compare against such high-performance blades. For me, I want to know what's best, so I appreciate using a high standard in such testing. But the premise, as expressed so far, that the "field experience" of talonite users carries as much weight as Cliff's well-defined testing, just doesn't hold for me.

Speaking of going back to unanswered questions in this thread, no one has yet addressed my questions about anyone having experience in which Talonite has performed better than any of the CPM steels, other than in the corrosion resistance category.

I'm not trying to belittle anyone's opinion or deride anyone's experience. I'd just really like to know what performs best. But when someone tells me a blade is the best just because it's the best he's ever used, I'm skeptical because I've been there before. Cliff doesn't do that--or if he does, he goes the extra mile. He tells us what was better about the blade in his tests, tells us how he did those tests, and shows us something to compare it to. To help us do that, he carefully defines as many of the variables as he can. Simply.

Until someone does an equal job of clearly defining blade testing and variables, with different results--or shows a better way of doing the tests--as I said, for me the rest is just conversation.
 
Maybe you can maintain 50% consistency, but the scatter will not be as low as you claim, due to it being a non-random phenomenon. I know all about judging distances, and have worked enough with small machine parts to be able to judge distances down in the deca-mil range. Does that mean I can maintain a force that consistently? Of course not. You are comparing purely sensory phenomena to motor-sensory phenomena.

bald1, is this maybe the kind of thing you are talking about? A little bit of spin to make himself sound like he knows what he's talking about, when he really doesn't? Not quite the same as spinning other people's words, but still a type of spin (using actual terms, examples, etc., but applying them in a way they don't actually apply).

Want to do an experiment? Take a digital gram scale. Watching the readout, apply a force of 1 kilogram. Release and repeat several times. Now, turn the readout away from you, and have someone else record the results (without letting you see). Try to apply that same 1 kg force 1000 times, and see what kind of scatter you get, and if there doesn't seem to be a pattern.
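One simple way to check the recorded forces from this experiment for a pattern is lag-1 autocorrelation: random scatter gives a value near zero, while an overshoot-and-correct hand produces alternating errors and a strongly negative value. A sketch with made-up scale readings:

```python
def lag1_autocorrelation(xs):
    """Near 0 for random scatter; far from 0 when successive values are related."""
    mean = sum(xs) / len(xs)
    dev = [x - mean for x in xs]
    return sum(a * b for a, b in zip(dev, dev[1:])) / sum(d * d for d in dev)

# Hypothetical gram-scale readings from a hand that overshoots, then corrects:
patterned = [1005.0, 995.0] * 50
print(round(lag1_autocorrelation(patterned), 2))   # strongly negative: not random
```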

re: expectation
They would seem to be revelations to you. You mention nothing of the sort in your reviews, and claim to have a reasonable amount of scientific objectivity. Again, unless you can arrange a double-blind trial (don't see how, since your method still allows you to know which knife you are using, and there really isn't any reasonable way to prevent you from knowing that without otherwise compromising the results), you can't claim that this is a scientific procedure.

Using time as a pseudo-random number might work if you used something subtle like milliseconds, but a gross measurement like fractions of minutes is easily within the level at which you exert a subconscious influence. Anyone who can say 'one-Mississippi, two-Mississippi,' etc., has experienced this natural ability to judge time intervals. Waiting to calculate averages falls into the same pitfall: your brain performs subconscious calculus in order to catch a thrown object; do you really think it has any trouble taking an average on the fly?

If you want to claim scientific objectivity, and still use single-blind (or less) methods (and those really are the only type available for most knife tests, short of comparing completely identical blades made from different steels but otherwise indistinguishable), you must use a large number of testers (not just a few friends).

re: training:
Reading a few mat. sci. books and having a few discussions with MEs does not qualify you for failure analysis. It is one of the most intensive and esoteric fields in the whole of engineering. I wouldn't hire an engineer to perform failure analysis without at least a Master's degree or two, and I would probably require that the person had worked for several years in various related fields, unless I was truly impressed with his/her references.

re: complexity:
You did not say that fatigue is a complex issue. You specifically said that it is only two simple variables.

re: extreme tests:
You have also said that extreme tests can replace low-stress testing. Which is it? And using extreme tests at all requires careful control, as even slight variances can produce wildly differing results. For example, a friend was recently working on an RCS thruster for a missile. This one had to be particularly compact, producing upwards of a thousand pounds of thrust from a unit approximately the size of a D-cell battery. They predicted a 25% variance between their model of how the thruster would perform and the actual data from the first test firing. They achieved a 10% difference, and were nearly in tears. The lesson? Extreme tests are very hard to predict, implement, and draw conclusions from.

bald1, I think we're pretty much on the same page here. While Cliff's tests might be useful 'real-world' tests, he claims that his tests are scientific, and therefore better than the real-world tests. If he presented them as real-world tests, I would have no problem, but he wants to supplant others' real-world testing with his own, carefully disguised as science. Both as a knife user who values having a wide variety of real-world anecdotes to draw upon, and as a scientist who expects data passed off as scientifically accurate to be just that, I find this quite annoying. On the one hand, it seeks to drive out other real-world testers; on the other, it undermines the validity of true scientific data.

--JB

------------------
e_utopia@hotmail.com
 
JB--

Please don't let me interrupt your debate with Cliff, but if I could ask a simple question:

Can you suggest some concrete ways to improve Cliff's methodology, or share with us a testing procedure of your own design, that will yield more reliably accurate results in measuring the variables Cliff is trying to measure? For instance, his comparison tests between Talonite and any of the other blades he's tested against it.

Thanks--Will
 
JB aka e-utopia,

Your last paragraph is spot on. The issue of supplanting others is at the crux of my lament, for the manner in which it is done is rather offensive. I don't like psycho-babble either. :)

Will,

Dunno what to say about your requests for comparisons, other than that I have blades of many of the metals you've cited, as well as numerous others. I've stated repeatedly that Talonite is not a panacea. But one virtue, discounted heavily by Cliff, is the ability to maintain a functional cutting edge for a very long time. Yup, it bests my CPM 440V blade in my opinion, but then again that blade is a tanto with a one-sided grind. I've said I prefer A2 in my big knives too. I don't think you've read any of the Talonite owners as saying the stuff exceeds everything else at every point of comparison. What it does, and that has been covered over and over, it does very well. I really like A2, BG-42, 0170-6C, the CPM steels, some formulations of damascus, and what the Samis do with Mercedes-Benz car springs! I also like Talonite. O-1, M2, 440C, VG-10, ATS-34, and titanium are all okay in their own right, but not at the top of my wish list when other choices are available. Exceptional heat treatment and design implementation will sway me, though. This is all based upon experiences borne of my personal usage. Now, whether this works for you or others is another matter. All I can do is report what floats my boat and why. We call it field usage, and it sure as hell isn't scientific.

When someone recommends a car to me, it's usually for specific reasons, e.g., reliability, economy, handling, etc. Now, someone's idea of handling might not jibe with my preference for sports-car-type handling. I seek to determine their frame of reference; then it conveys well and I can integrate their experiences into my "hopper". When those knife industry guys and folks from outfits like Brigade Quartermasters said that their wild boar hunting/butchering knife standard had been an exceptional Blackjack A2 blade, and that the Talonite blade they used blew them away, and why, I could factor in their experiences. Scientific? Nope. Valid? I dare say so. And they're not alone in their comments, as has been documented numerous times. To discount all this out of hand as non-repeatable or anecdotal is your prerogative. Everyone should formulate their own methods of acquiring and processing information incident to a purchase. You choose to follow Cliff's lead, which is fine with me.

Just count me with those who
-- have found Talonite to offer some virtues we value
-- like JB aka e-utopia, question pretentiousness and disingenuousness
-- and know Talonite ain't soft, belying the RC numbers a Rockwell tester gives.



------------------
-=[Bob Allman]=-

I did NOT escape from the institution! They gave me a day pass!

BFC member since the very beginning
Member: American Knife & Tool Institute; Varmint Hunters Association;
National Rifle Association; Prairie Thunder Inc.; Rapid City Rifle Club;
Spearfish Rifle & Pistol Club; Buck Collectors Club (prime interest: 532s)
Certified Talonite(r) enthusiast!
 
Bob--

First, thanks for the comparison with CPM 440V.

Also, I have no quarrel with field experience--that's what I base all of my final evaluations on. I've just found that my level of expectation and experience is often very different than that of others, when it comes down to what a blade will do. Many of the kinds of things Cliff measures and specifies in his testing are things that have been lacking in anecdotal information I've received in the past. I've found Cliff's methods to be very easy to understand and fair as far as I can tell, and they have been, without exception, consistent with my own field experience. So, yes, I do follow his lead, and gratefully so.

I'd be the last person to tell someone else that their own experience doesn't matter. Of course, it's ALL that matters in the end. I just like to avoid buying every new knife/material that comes onto the market, to see if it's any better than what I'm using already. Cliff's reviews have saved me a lot of money, and have helped me make several buying decisions, the results of which I've been consistently happy with. They've also confirmed some glaring weaknesses I've discovered, after having shelled out significant dollars for blades that didn't stand up to the hype.

If you're so inclined, I really would like to know more about the comparison between your 440V blade and your talonite. Do you know about what the blade bevel angle is on both blades--the 440V and the talonite, and the hardness of the 440V? Thickness of the blades overall and at the top of the edge bevel? Was the steel blade cryo quenched as well as heat treated?

Thanks, Bob
--Will
 
Will, maybe there's hope for you yet! Step back from the dark side. :)


In my most recent post, I did mention a few ways Cliff could improve his methodology, but scientifically testing knives by hand is not an easy proposition.

I can share all sorts of testing procedures for all sorts of different variables. Testing knives scientifically certainly ain't cheap, but it is possible. The first thing that actually got me really interested (as in, more than just saying "I wish that someone would...") in accurate scientific testing of knives was the reported differences between the way Talonite wears and the way steel wears.

To keep this simple, I'm just going to look at a rig for testing wear during push-cuts in paper. Any other cutting method can also be simulated, but the designs get more and more complex, making it harder to discuss in the 'forum' format.

The knife is clamped to a pivot, probably at the blade/handle junction, although the placement is not too important. The bottom of the handle rests against a force sensor (I prefer electronic, but if you want to be quaint, you can go with a torsion bar, needle, and spool of paper to produce those seismograph-style charts). A spool of paper is turned by a motor such that the blade slices the paper lengthwise. A counter wheel measures the linear feet of paper cut (and if you want to get fancy, you can use the counter as a tachometer and connect it to a motor controller to maintain a constant linear velocity as the diameter of the drive spool changes). As the blade cuts the paper, it exerts a force on the force sensor, which is recorded (either by computer or on a paper graph).

The results for a Talonite blade should be that the force starts low, jumps up, then levels off. For a steel blade with similar geometry, the force should start at the same point, then gradually increase, passing the Talonite blade at some point, but that point should be at a force level where the blade would still be considered 'sharp.' Further tests of interest include: using different paper weights; mounting the pivot such that the cutting action is slightly across the paper, while still constraining the motion of the blade so that it stays centered; using heavier gauge paper to test edge rolling; and mounting the pivot assembly to a device which raises and lowers it in a sawing motion to simulate slicing cuts, among others.
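The expected shape of those two force curves can be sketched numerically. This is purely an illustration of the prediction above, not real data: the functions, rates, and force values are all made up, with Talonite modelled as a quick exponential rise to a plateau and steel as a steady linear climb from the same starting force.

```python
import math

def talonite_force(feet, start=1.0, plateau=3.0, rate=0.5):
    """Hypothetical Talonite curve: jumps up quickly, then levels off."""
    return plateau - (plateau - start) * math.exp(-rate * feet)

def steel_force(feet, start=1.0, slope=0.02):
    """Hypothetical steel curve: starts at the same point, climbs steadily."""
    return start + slope * feet

def crossover(max_feet=1000):
    """First footage at which the steel curve passes the Talonite curve."""
    for feet in range(max_feet):
        if steel_force(feet) > talonite_force(feet):
            return feet
    return None
```

With these made-up parameters the steel curve overtakes the Talonite curve somewhere around 100 feet of paper; whether the real crossover lands in the 'still sharp' force range is exactly what the rig would measure.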

Rigs can also be made to test chopping, although to simulate chopping by a human, pretty significant vibration damping is needed to mimic the damping abilities of the human musculature. If you can think of a use some knife is put to, a device can be designed to test its abilities in that use. Is it cheap? No, but that's science. What Cliff does is not. If he stopped pretending to be a scientist, and acting like he's superior to others, I would have no problem with him and his real-world reviews.

Does that satisfy you as to what is science and what is 'real-world'? Even if not, it was fun, as that was the first time I tried to explain that apparatus in writing.

--JB

------------------
e_utopia@hotmail.com
 
E_utopia :

Maybe you can maintain 50% consistency, but the scatter will not be as low as you claim, due to it being a non-random phenomenon.

I have done it, so obviously it is possible. Your argument against this is that I am actually recreating the results of past work, or forcing trials to show a set performance by doing fairly complicated mental calculations. If that were possible I would agree with you; however, I don't think I could be doing the kinds of calculations that would be necessary.

If I am comparing two knives, then yes; in fact I have seen this happen. This is why I started doing trials with larger numbers of knives and not looking at the results in detail as I was recording the data. I simply don't think I can be calculating averages on the fly across a run of 8-12 blades and using that to force results. I will grant that it is quite possible I may subconsciously remember those results once they have been calculated, and could indeed be forcing future work to agree with them; however, I don't believe that is the case, because many times I have encountered contradictions which, when studied, revealed new blade effects I was unaware of previously.

In general, though, this is of course a very important consideration and something I take many steps to guard against. For example, when Will sent me the two Boye hunters he mentioned that Phil Wilson had used Boye dendritic materials and that I should drop him an email. I replied that I would do that only after I had done an evaluation myself. I don't think I would force what I did to agree with Phil's description, but I don't see any reason to take the chance. Similarly, the two Boye blades were unmarked (cobalt/steel) and I never looked up which one was which until I had obtained the relative performance. Again, I don't think that knowing which one was cobalt and which was steel would have skewed the results, but I didn't see any reason to add that chance when it was easily avoided.

The same kind of thing goes for the seconds random number generator. I read about this many years ago in some computer science course, and I have verified that it works by looking at the patterns it produces. I have found that it does not take much distraction at all before I am unable to force a pattern into the results, unless of course you are talking about checks on a very small time scale, which obviously is not the case here.
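For readers unfamiliar with the trick, the "seconds" generator works by glancing at a clock and using the current seconds value as a cheap source of randomness, e.g. to decide which of two blades is tested first. The sketch below is my own illustration of the idea (the function name and the simple parity split are assumptions, not Cliff's exact procedure):

```python
import time

def pick_first(blades):
    """Use the current clock seconds as a cheap coin flip between two blades.

    Because the tester is busy and distracted between trials, the seconds
    value at the moment of the glance is effectively unpredictable to them.
    """
    seconds = int(time.time()) % 60
    return blades[seconds % 2]
```

A call like `pick_first(["Talonite hunter", "440V hunter"])` then returns one of the two, with the ordering outside the tester's conscious control.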

Concerning double blind studies, again this is something that is an ongoing part of the evolution of the reviews. There have been many people involved in such a project, the goal of which is to get a random selection of blades, not from the maker but from sellers, and send them out to a number of people who do the reviews independently. If maker support were established then materials could also be looked at in this manner, and of course having everything unmarked would help by reducing the chance of forcing presuppositions. I had this conversation about two years ago with Drew Wilson, who was having five identical blade geometries looked at by a group of testers.
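The blinded protocol being described amounts to a coordinator relabelling the blades with opaque codes, keeping the key to themselves, and having testers report results against the codes only. A minimal sketch of that bookkeeping, with hypothetical names and labels of my own invention:

```python
import random

def blind_labels(blades, seed=None):
    """Assign shuffled opaque codes to a list of blades.

    Returns (codes, key): testers see only `codes`; the coordinator
    keeps `key`, which maps each code back to the real blade.
    """
    rng = random.Random(seed)
    codes = [f"BLADE-{i:02d}" for i in range(len(blades))]
    rng.shuffle(codes)
    key = dict(zip(codes, blades))
    return list(key), key

codes, key = blind_labels(["Talonite hunter", "440V hunter"], seed=1)
```

The testers' reports ("BLADE-00 held its edge longer") are only decoded against `key` after all the data is in, which is what removes the chance of forcing presuppositions.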

I realize such problems exist, and when I can I will do what is possible to eliminate them, or at the very least make them as small as possible. However, there are lots of fields where such studies are not required. They are only strictly needed when the potential for human direction of the data is strong enough to skew the results. You obviously think that is the case in knife reviews, for the reasons you describe; I do not, when steps are taken to make the effects unlikely.

As for fatigue, I was commenting on a specific reference you made. What I said is relevant to that, as is the specific example I gave which refuted your claim. Yes, of course there is more to the subject than that, but that is an elementary observation that can be applied to anything.

If I have said that extreme testing can replace low stress, high volume work, then it was either a typo or said at a time when I believed it. I don't recall such a time, though, and none of the recent reviews reflect it. All of them describe how the testing relates to the actual use the knife is intended for, which is always done in addition to the controlled work.

As for your cutting test, that is exactly what Spyderco does, and in fact something I have experimented with in the past. I never used it seriously, as I think it skews the results by removing the strength and toughness requirements from edge holding, which are crucial when the blade is going to be used by a person and not a machine. I do think the information is obviously useful, especially in the limiting case (this is the best edge retention you could hope to achieve by hand with perfect control, sort of thing), and in the future it is something I will probably add.

-Cliff


[This message has been edited by Cliff Stamp (edited 08-06-2000).]
 
Isn't this thread about Talonite, and not about knife testing techniques or about Cliff?

------------------
Ray
MesserForum.net
 
JB--

Your machine sounds like it would be fun, although Cliff's remarks about that approach do raise some good questions. I'd certainly like to see the results of testing with it, though. You seem to be convinced the results would vary significantly from Cliff's, and I hope you go ahead with your experiment.

But JB, are you actually saying, straight-faced, that a man cutting cardboard with two knives, until one of them gets too dull to cut, is not a valid measure of which blade holds an edge better? Or is it just that the method isn't complicated enough to fall into your definition of "science"?

--Will
 