Regarding IDIOT tester.

The cinder-block-buster-o-matic consists of a rigid pole/lever, to one end of which the knife to be tested is attached.
OK, it's pretty much the same idea as the Mythbusters' chopping rig, which you yourself agree isn't the epitome. Anyway, OK, we have the rig, and it will provide the data: exactly how many chops it would take to go through a cinder block, with what force, what initial edge, etc.

The point of using a machine is to remove human error from the equation.
Removing human factor or error when I am looking at the properties of the steel, handle material, or cinder block in general is fine; nothing against that. However, that data is readily available for every steel out there: toughness and wear-resistance charts, whatever else. It's standard, well-known science, and the methodology is there. No need to come up with our own rig for that data.

My problem with this approach is that in everyday life a knife is never used in a rig setup; human factor or error is constantly there. That is how knives are used, with those errors, and removing them from the equation won't necessarily give more accurate results.

Honestly, what do you think: if you were to chop through a wooden block and we recorded the angle, velocity, and force of each swing or chop, what would the chart look like, closer to Noss's results or closer to the machine's?

The point is that with appropriately bounded data, one could say that x knife is tougher than y knife regardless of environment (field, shop, or lab).
That is based on the assumption that the knife in question is used EQUALLY in all the environments you listed. Unfortunately, humans cannot use the same knife equally even in the same environment.

Let's look at another rig: a metal-cutting machine in any factory. It holds an extremely hard tool-steel blade and cuts n feet of metal before it needs replacement. That is similar to your test rig, right? It should work fine to get an idea of the metal's wear resistance at that hardness. If we follow your logic, this test is perfect because human error is removed: we have bounded data, no variables or deviations, and a controlled environment.
Given that setup, you could conclude that the best hardness for XY steel is around 67-68 HRC or even greater; I don't know what the maximum is for industrial applications.
However, if someone made a knife for you with those parameters, how long do you think it would last in your hands before chipping or breaking?
So, I don't think completely removing errors and deviations from knife testing gives more accurate results.

You don’t have to be an expert in a particular field to recognize flaws in testing methodology, or the near absence of methodology altogether.
Yes and no. In practice, not being an expert in a particular area is also a reason why your expertise, criticism, and so on won't always be correct and will have less credibility than an expert's. I don't imply Noss is a scientist or an expert; this is about you and your input on the knife-testing subject. Which also means that I, not being an expert in this field, can be wrong on test methodology too.
 
Why do I feel like I'm in a global warming thread? :p :D

Bring on the charts!!!
 
It could go a ways toward answering the question of "How many times could the same guy get a poorly heat-treated blade from the same company?" Also known as "random bad heat treat versus evil serrations." Uniformity is still the name of the game.

A group of individuals (perhaps 10 to 20?) doing destruction testing on knives from the same maker would, presumably, not purchase them all at the same time. Some of the knives would be relatively new and others might be donated long after manufacture. Wouldn't a maker of quality cutlery be careful to let as few poorly treated knives out of the factory as possible?
 
My head feels better, so off to bump it again.

Removing human factor or error when I am looking at the properties of the steel, handle material, or cinder block in general is fine; nothing against that. However, that data is readily available for every steel out there: toughness and wear-resistance charts, whatever else. It's standard, well-known science, and the methodology is there. No need to come up with our own rig for that data.

I think it could be enlightening to take a manufacturer's steel, with their heat treat, machine it into some standard Charpy specimens, and test it. Then compare it to industry figures for toughness of that steel (there are different toughness tests, which further complicates it). It could point out whether you have steel quality issues with your steelmaker. I'd also be interested to see what the critical temperatures of some alloys are, to indicate their suitability for use in cold temperatures (though you may get a good idea with just some research). Could be important if you're an ice road trucker!


My problem with this approach is that in everyday life a knife is never used in a rig setup; human factor or error is constantly there. That is how knives are used

No tests used by engineers in academia or industry intentionally add uncertainty to the test parameters to account for human uncertainty in an attempt to predict how the machine/tool/material and the human will behave. The idea is akin to putting humans in the crash-test cars and letting them hit the wall at varying speeds, with varying points of impact, at varying angles, to determine which car is safest in an impact.

Some tests simulate human use by introducing some factor into the test, but they will always do it in a very repeatable fashion.

But you can get objective results for what the car (or knife) will do (cut) with a known force and a known (slice) speed if you use proper scientific method. That is how they do auto crash tests.

that is how knives are used, with those errors, and removing them from the equation won't necessarily give more accurate results.

That is not true. You will always make any test less repeatable by adding more variance in the test parameters. It's just math! ;) And by reducing the uncertainty of your test parameters, you are always guaranteed to get more repeatable results, and make a more reliable conclusion. Always.
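The repeatability claim above can be illustrated with a quick simulation. Everything here is made up for illustration: the toy chop model, the swing-angle spreads for "machine" and "human," and the numbers are all hypothetical, not real knife data.

```python
import random

random.seed(42)

def chop_depth(force, angle_error):
    """Toy model (hypothetical): cut depth grows with force and
    shrinks as the swing deviates from a square hit (degrees)."""
    return force * (1.0 - abs(angle_error) / 90.0)

def spread(angle_sd, trials=10_000):
    """Standard deviation of measured depth when the swing angle
    varies with the given standard deviation, in degrees."""
    depths = [chop_depth(100.0, random.gauss(0.0, angle_sd))
              for _ in range(trials)]
    mean = sum(depths) / trials
    return (sum((d - mean) ** 2 for d in depths) / trials) ** 0.5

machine = spread(angle_sd=0.5)   # rig: tightly controlled swing
human = spread(angle_sd=10.0)    # person: much looser swing

print(machine, human)
assert machine < human  # more input variance -> less repeatable output
```

The point the simulation makes is narrow: widening the input spread always widens the output spread; it says nothing about which spread better represents real use, which is the other side of this argument.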

And I think Hardheart had a very good view of nossy's videos. :)
 
I, personally, find Noss's testing to be interesting. Not scientific, but interesting. You guys come in here arguing whether or not his tests are "scientific." Even he has mentioned that his tests aren't scientific! Who really cares? I don't think his tests are meant for comparison purposes, either. When I see his videos, I think "interesting, that knife can stand up to more abuse than I originally thought," not "this knife performed (insert scientific quantitative data here)." His tests remind me of how programmers test their programs: "if you can't break it, it's good."

I've never seen such disrespect coming from this forum. If you guys wanna criticize him, at least be respectful, i.e. constructive criticism. Please, stop arguing about whether or not his tests are valid, scientific, etc.
 
Personally I have a real problem with Noss. Before I watched his test of the FFBM I owned no, zero Busse knives. Two months after watching the test I own 16 Busse's and it is all Noss's fault! Oh! The humanity!!! and what he has done to my bank account!
 
I think it could be enlightening to take a mfg's steel, with their heat treat, machine it into some std charpy specimens, and test it. Then compare it to industry figures for toughness of that steel (there are different toughness tests, further complicating it).
I agree on that. Several custom knife makers have mentioned that the manufacturer's data wasn't applicable or the best choice.

The idea is akin to putting humans in the crash-test cars and letting them hit the wall at varying speeds, with varying points of impact, at varying angles, to determine which car is safest in an impact.
I disagree. Any driver can drive a car at a given speed and do it consistently, over and over again. With knives, I doubt there is anyone who can cut or chop with the same precision.

But you can get objective results for what the car (or knife) will do (cut) with a known force and a known (slice) speed if you use proper scientific method. That is how they do auto crash tests.
Dunno. Again, car testing as it's done is easily repeatable because a car is mechanically operated. Knives are not.

That is not true. You will always make any test less repeatable by adding more variance in the test parameters. It's just math! ;)
I don't argue with the math or with the fact that variance will reduce test repeatability. However, once you get out of the lab and controlled conditions, the situation changes.
Take your own example about steel data. P. Wilson commented that he feels the data provided was obtained in a lab (by the manufacturer), and because of that it doesn't work too well in real life. He had to do his own testing and experimenting to get optimal results for his purposes, which is knife making.
 
A group of individuals (perhaps 10 to 20?) doing destruction testing on knives from the same maker would, presumably, not purchase them all at the same time. Some of the knives would be relatively new and others might be donated long after manufacture. Wouldn't a maker of quality cutlery be careful to let as few poorly treated knives out of the factory as possible?

I was alluding to a recent set of videos by Noss where he broke two knives by a well-known maker. Some argued that it was a sampling issue, i.e., a bad one gets out every now and again. My question is: what are the odds that two knives (different models, from the same company, bought at different times, by one person) would both be outliers of their respective populations? Others argued that it was a design issue, as both broke at the same place. However, all we really have is two broken knives, and no real quantitative data from which to draw conclusions. "Wow, that knife broke way before I thought it would" is not sufficient for comparison.
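For what it's worth, the sampling-issue arithmetic is easy to sketch. Assuming, purely hypothetically, that each knife independently leaves the factory with a bad heat treat at some rate p (the rate below is invented for illustration), the chance that both purchases are bad is p squared:

```python
def p_both_defective(defect_rate):
    """Probability that two independently bought knives are both
    bad, assuming (hypothetically) each is defective with the same
    rate and the purchases are independent."""
    return defect_rate ** 2

# Illustrative number only: even a fairly sloppy 5% escape rate
# makes two bad knives in a row roughly a 1-in-400 event.
print(round(p_both_defective(0.05), 6))  # 0.0025
```

Of course, if both failures share a cause (same design flaw, same bad batch), the independence assumption collapses and the two events stop being rare coincidences, which is exactly the design-issue argument.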

Honestly, what do you think if you were to chop through the wooden block and we'd record angle, velocity and force of each swing or chop, what would it look like on the chart, closer to Noss results or closer to machine result?

Eventually I will learn not to get sucked into these debates. The point is, and always has been, that Noss's videos provide no quantitative basis for comparison between knives.

Yes and no. In practice, not being an expert in a particular area is also a reason why your expertise, criticism, and so on won't always be correct and will have less credibility than an expert's. I don't imply Noss is a scientist or an expert; this is about you and your input on the knife-testing subject. Which also means that I, not being an expert in this field, can be wrong on test methodology too.

Imperfect knowledge is by definition imperfect. However, a basic understanding of the subject matter gives one standing, and training/expertise in a related field gives one credibility. I was unaware that this was about me; however, I have already stated that I design and implement scientific assays for a living. Perhaps you do not see the connection between science, the scientific method, tests, reproducibility, and a scientist. What is it that you bring to the discussion?

So, I don't think completely removing errors and deviations from knife testing gives more accurate results.

This may become my new sig line.
 
I just think that this should be recorded for posterity.
It is recorded already, don't worry.
Anyway, let me rephrase: your lab tests will be less accurate, because of the variance removed, when we talk about matching human performance. I think you understood what I meant, though.
I also believe that should we record all the data for you, an average human, and your test rig for, let's say, 100 chops, your results and the average human's would be a lot more similar than yours and your rig's. I assume you disagree on that too.
 
OK, I propose this as a compromise: Noss stops using decimals (4.5 for the CTD, and 3.75 for the KBD2) in his blade rating scale, to account for the lack of testing precision. Come on, Noss, those decimals are pretty hilarious :D
 
I also believe that should we record all the data for you, an average human, and your test rig for, let's say, 100 chops, your results and the average human's would be a lot more similar than yours and your rig's. I assume you disagree on that too.

Wow.

That's like saying two different car crashes that occurred at the same speed are more alike than a scientific test...

Which is okay, but I think most tests would be designed to replicate one or the other.
 
Personally I have a real problem with Noss. Before I watched his test of the FFBM I owned no, zero Busse knives. Two months after watching the test I own 16 Busse's and it is all Noss's fault! Oh! The humanity!!! and what he has done to my bank account!

Well, I saved about $300 by not buying a knife that did not stand up as expected. I suggest you limit yourself to one knife a month...

But can you imagine if you had already spent quite a bit of money on some knives, and because of Noss it turns out you should not have? But you already spent it... A pretty hard situation, and all because of Noss... Of course, whom else to blame?

Thanks, Vassili.
 
Well, I saved about $300 by not buying a knife that did not stand up as expected.

So the single, guiding principle for buying a knife should be that it stand up to cinder blocks and hammer blows. I see. All this time, I thought knives were tools used primarily for cutting or chopping softer media. Shame on me. :D
 
I also believe that should we record all the data for you, an average human, and your test rig for, let's say, 100 chops, your results and the average human's would be a lot more similar than yours and your rig's. I assume you disagree on that too.

Wow.

That's like saying two different car crashes that occurred at the same speed are more alike than a scientific test...

Which is okay, but I think most tests would be designed to replicate one or the other.

Darthsoaker, Gator97 is still arguing that because a human cannot replicate a machine, the tests by the machine are not scientifically valid.

I'm not even going to rebut this anymore. Those that care can look up earlier posts in this thread by Broos, me, and others for the rebuttal.
 
If the machine does not induce the lateral stresses and random impacts of chopping, then it probably isn't doing a proper durability test of the knife: heat treat, steel, grind, profile. Rigid tests aren't less accurate; they're a less accurate representation of use. One of the benefits of machine precision and consistency is that you don't have to overbuild to the degree you do for a nebulous, ham-fisted 'average' human user.

Car testing: cars aren't meant to be run into things, so why do we test them? Now, why would it be a good idea to test a knife against metal and concrete impacts, and prying, and to do it in human hands, under varying circumstances? A baseline is great, but someone else is going to have to provide that, just like the IIHS lets somebody else give horsepower ratings and towing capacities. I'll bring the popcorn.
 
Gator97, I saw in another thread that you are a software engineer. I think that I have found the source of conflict: we are actually speaking different languages. As a software engineer, presumably your professional life revolves around human variability. Let's face it, people do some stupid stuff with their computers, and you have to contend with as many hardware/software combinations as there are people, and your code has to be solid. The only requirements are that it does what it is supposed to do and be stable. The cause of instability? Human variability. If everyone had the same software and hardware and ran their computers the same way all the time, stability would become, presumably, a non-issue.

To you, knife toughness is chiefly a human issue. How a human abuses the knife has a direct correlation to the observed toughness in this world. The question that you ask is, can a human break this blade? If it breaks once, the answer is yes.

On the other hand, I am more or less a materials scientist. As an enzymologist, I am concerned with the intrinsic properties of the enzymes that I study. These are properties that are independent of the person doing the assay and, if the assay is designed correctly, independent of the assay. We compare enzymes against each other in order to pick the ones that have the properties that we want. I'm not talking about 2, 10, or even 20. Literally tens of thousands of enzymes need to be accurately and efficiently assayed for several parameters of interest. Human variability and assay variability can hide the properties that we are trying to study.

I view toughness as an intrinsic property of the blade. Handle material, fasteners, blade shape, profile, height, width, primary bevel angle, edge grind and angle, spine thickness, distal taper, blade steel, and heat treat all play a role. However, at its core, toughness is a property of the knife. Just as with the enzymes, human variability and test variability hide the property under study. In order to compare one knife against another, human and test variability must be removed or minimized, or the comparison is meaningless. Likewise, replicates should be done in order to assay for variability in the sample set. Was the first result an aberration? Do it again and find out.

Once is an accident. Twice is a coincidence. Three times is the beginning of a trend.
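The replicate logic above can be sketched numerically: with repeated measurements you can ask whether one result sits far outside the spread of the rest. All numbers below are toy values invented for illustration, not real test data.

```python
def mean_and_sd(values):
    """Sample mean and (population) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return mean, sd

def looks_aberrant(result, replicates, k=3.0):
    """Flag a result more than k standard deviations from the
    mean of the replicate set -- the 'do it again' check."""
    mean, sd = mean_and_sd(replicates)
    return abs(result - mean) > k * sd

# Hypothetical chops-to-failure for five knives of the same model:
replicates = [48, 52, 50, 49, 51]
print(looks_aberrant(12, replicates))  # True: likely a bad blade
print(looks_aberrant(47, replicates))  # False: within normal spread
```

With only one or two samples, as in the videos being discussed, there is no spread to compare against, which is the whole objection: a single break tells you nothing about whether it was typical.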
 