Which folder is the strongest, period?

Bob Taylor,

I understand what you are saying, but if I read you right, then for the inventor of the lockback, midlock, liner lock, or for that matter any type of lock, there should be only one knife of each kind and one maker of it! This tells me that almost all knives are copied to some degree, with or without royalties.

If that were the case, then there wouldn't be many knives out there at all.
The knife industry would be kind of dead.

If a maker doesn't want to or can't patent his design, then it should be open and fair game. If it was Chris Reeve who actually designed it first and didn't want to patent it for whatever reasons, then others should and can expand on and improve the design, which is exactly what MISSION KNIVES did! As a knife collector and user I only want the best, and Mission Knives did improve the handle, which makes for a better and stronger folder. The Sebenza is just a little too small for my tastes, but the BG-42 steel they use is the best for a blade.

I must say, though, that I don't agree with others who steal or copy a knife that has been patented by another without paying that inventor royalties.

Just my opinion as is yours!

Thanks Bob, I do value your opinion, just as I value everyone else's; what better way to learn!

Sorry that this thread got a little off subject here!

Mark

" Knife Collectors Are Sharp People
smile.gif
"



[This message has been edited by Mark W Douglas (edited 08 April 1999).]
 
Many of those locks you mentioned did have a patent on them. In the old days a patent was good for 17 years; currently it's 20 years from the date of application.

Bob Brothers and I have another lock in the process, "The Stud Lock" (isn't that macho sounding). We put the Wakatan in a drawer until we get the patent issued. Why? Should we make it while patent pending and then fail to get a patent, everyone in the industry gets a free lock. Should the Patent Office decide it's not going to issue a patent, then the prototype will fall into Bob Brothers' forge and the paper shredder will have a good meal. I'm sure they would pay us for the effort, R&D, and the idea. Do you see anyone sending Chris Reeve money or giving him credit?

While I believe helping the industry isn't a bad thing, I also believe in capitalism. The reason this country is what it is today is the American people's industriousness, individualism, hard work and, yes, good old capitalism.

Bob Taylor
 
Bob:
The information contained in your patent application will become public knowledge if and when the patent is granted, denied, or dropped from prosecution by the applicant. You're only truly protected during prosecution, because the patent claims are held in confidence during that process.
This is my experience from the prosecution of the patent on the lock system for the Uluchet. It may have changed, so you should check with your attorney to avoid any surprises.

------------------
P.J.
YES, it is sharp, just keep your fingers out of the way!
www.silverstar.com/turnermfg


 
Ohhh, I thought there was a double post of this thread and one had vanished, but I just discovered that it didn't vanish; it was on the KnifeForums.com site also. Duh ... I knew I had posted my response to this thread already! It is hell getting older sometimes.


Mark


 
If autos are allowed here... I'd have to say that the 9100 Auto Stryker is a hell of a strong knife. Blade and all!!! -AR
 
Bob Taylor, you state:

Unscientific testing is not impressive.

I would agree with that.

However, the post by Mission (concerning the MPF) that you are responding to:

It has been tested by Kim Breed of Blade magazine by first ramming it into a tree, then Kim did 12 pull-ups with it - tested the blade and lock strength.

describes a scientific test. The knife was exposed to a stress and then examined for any resulting damage. The stress was described in some detail (Kim's weight and the way in which he applied it), and the resulting examination was also specifically described.

I would agree that the test would have been more informative if the exact position of his grip on the handle had been stated, as well as how far the blade was stuck in the tree (mainly so it could be exactly duplicated), and if it had been repeated with another knife - but that does not make it unscientific.

That test provides a data point. More points make the picture clearer, but even one data point collected without strong bias is valuable information and just as scientific as anything else.

As for testing requirements, machines do not make science - people do. Albert Einstein often used "thought experiments" to examine physical theories while never getting out from behind his desk. I would be careful about calling his experiments unscientific.

In any case, as a general rule of scientific study, you don't engage in precision measurement beyond the precision of your actual field data, as you are then just collecting valueless numbers.

Case in point related to knives: if I wanted to test, say, edge retention on two models, I would cut three or four hundred feet of cardboard with each and examine the edges after all the slicing. Now you might say there are more precise ways to grade edge retention, and abrasion resistance in particular, and I would agree. However, referring to the rule stated above, if I can't see a difference in edge degradation in normal use, then for all *practical* purposes there is none. The fact that some machine can see a difference is of no value, as it's not a machine that is going to be using the knife.
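As a rough sketch of that practical-significance rule (the scores and the 5% threshold below are assumptions for illustration, not results from any actual test):

```python
# Hypothetical fractions of sharpness lost after cutting the same length of
# cardboard with two blades (illustrative numbers only).
edge_loss = {"blade_A": 0.18, "blade_B": 0.19}

PRACTICAL_THRESHOLD = 0.05  # a difference smaller than this is not noticeable in use

diff = abs(edge_loss["blade_A"] - edge_loss["blade_B"])
if diff < PRACTICAL_THRESHOLD:
    print("No practical difference in edge retention.")
else:
    print(f"Practically meaningful difference: {diff:.2f}")
```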

Anyway, as for the strongest folder: if you extend the category to folding hatchets, the Uluchet comes out over any folding knife I have seen. I have chopped heavily with it for a number of weeks, pried with it hard enough to flex the handle about 45 degrees, and even thrown it around. It shows no signs of wear.

I did finally manage to put a small dent in the blade, though, when I was chopping through a piece of board. The Uluchet brought up solid, so I figured it had hit a knot. I then gave it a good smack, but still no go. Turns out there was a nail in the wood. The dent is visible but not large enough to affect performance.

-Cliff
 
Cliff Stamp
I think you missed my point. I wasn't challenging Mission's claim but offering them an opportunity to see where they stand. I'm sure the MPK is more than sufficient, and they make a great product. Andy Stamford put a Pocket Hobbit in a block wall, stood on it, and then kicked the handle. This was in an article in Tactical Knives. Kicking the knife also applied kinetic energy to the lock. While it makes great reading and is a basic test (writers usually don't have a laboratory to test in), I am not jumping up and down and touting it as a test. Having the Spyderco engineering people do a constant, calibrated test that can be duplicated is what scientific testing is all about. There is no way even the same man can duplicate a test by Kim Breed's method: the angle of the knife, the placement of his hand, how his weight was dispersed, etc.

You stated

As for testing requirements, machines do not make science - people do.
People make the machines so they can achieve a scientific result.

Your edge retention test idea holds no basis of validity unless you can answer the following questions:
1. The amount of pressure applied to the blade
2. The stroke of the cut
3. The consistency of the cardboard or media being cut
I guarantee you that each inch of common cardboard is different from the rest, and no human can come close to applying the same pressure and stroke. I would bet the variance would be in the 20% range for all three variables.

While at Spyderco's lab I watched their testing of blades for edge retention. Their machine applies a measured pressure with a measured stroke; I believe Pat Kelly said plus or minus 1%. The media is a special cardboard that costs $150.00 per pound because it is consistent.
Simple math would be that Spyderco could be within at best 1-6% of actual performance. Your testing would be plus or minus 8000%. I think I will stay with scientific testing.

The next question is: why don't they call Sal and have a couple of their knives tested? Pat (the knife killer) Kelly loves to break things. Mike Turber will tell you that Sal will test anything.

Bob Taylor
 
Bob, first off, uncertainties or variances do not combine like 20% x 20% x 20% = 8000%. In the situation you describe, the total per-cut uncertainty would be sqrt(20%^2 + 20%^2 + 20%^2), which gives about 35%. Of course the mean or average effect, which is what you actually see, will be much more precise than that: it will be close to 35% / sqrt(number of cuts). For the test I described, this comes out to about 1.7%.
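Here is a minimal sketch of that arithmetic in Python (the 20% figures are the per-variable spreads assumed in the previous post, and 400 cuts corresponds roughly to the three or four hundred 1-foot cuts described earlier):

```python
import math

# Assumed per-variable spreads from the post above (pressure, stroke, media) --
# illustrative numbers, not measurements.
spreads = [0.20, 0.20, 0.20]

# Independent uncertainties combine in quadrature, not by multiplication.
per_cut = math.sqrt(sum(s ** 2 for s in spreads))   # ~0.346, i.e. about 35%

# The uncertainty of the *average* over many cuts shrinks with sqrt(n).
n_cuts = 400                                        # roughly 400 one-foot cuts
mean_uncertainty = per_cut / math.sqrt(n_cuts)      # ~0.017, i.e. about 1.7%

print(f"per-cut uncertainty: {per_cut:.1%}")
print(f"uncertainty of the mean over {n_cuts} cuts: {mean_uncertainty:.1%}")
```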

What does all that mumbo jumbo mean? Basically, if I do the cutting test I described and look for a result, my margin of error is about 2% - or quite simply, unless the difference in edge holding is greater than 2%, I will not see it. This is perfectly acceptable to me, which is why the test is scientific and, more importantly, why it would be considered numerically efficient. What is the value of testing to, say, 0.1% (which I could easily do if I simply cut more cardboard)? Of what value is an increase in edge holding that minute? If I were actually to do this test, I would probably set my tolerances a little higher and test with a margin of error of about 5%. I would reach this with about one hundred 1-foot cuts with each knife. This makes sense to me as I don't consider an increase of less than 5% to be significant. And in any case, my estimation of the edge quality is the really significant source of uncertainty, not the means by which the edge is worn down. This is why I would aim to have the uncertainty in the wearing process a factor of ten below my ability to estimate edge loss, so I could then ignore it (which is why 5% is reasonable for the cardboard wear tolerance).
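A small follow-on sketch of the sqrt(n) scaling discussed above, again using the assumed 35% per-cut spread; the cut counts are only illustrative:

```python
import math

per_cut = 0.35                      # assumed combined per-cut spread from above
for n_cuts in (49, 100, 400):
    margin = per_cut / math.sqrt(n_cuts)
    print(f"{n_cuts:4d} cuts -> margin of error ~ {margin:.1%}")
# 49 cuts gives ~5% (the stated tolerance), 100 cuts ~3.5% (comfortably inside
# that tolerance), and 400 cuts roughly the ~1.7% figure from the earlier test.
```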

Whenever I do a review and post any numbers or make any kind of statement about edge holding, penetration ability, etc., this type of analysis has been done. I rarely post it, but it is available to anyone who wants it. For example, if I say two knives rated 1.5 and 1.9 on a penetration test, the actual test results are more like 1.45 +/- .05 and 1.85 +/- .05. The .05's are the 95% confidence intervals, or basically the uncertainties in the main results of 1.45 and 1.85. These are calculated by the standard method of propagation of standard errors through any and all calculations. I generally don't quote all of that, as it makes the tables more cluttered and I don't think such detail is necessary. I stick with the basic scientific convention of rounding off so that the uncertainty is in the last digit. Since I am not submitting any of this analysis for publication, I generally go a bit cruder and round one more digit off.
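For what it's worth, a 95% interval like the 1.45 +/- .05 above can be sketched from repeated measurements as follows; the depth values here are made up purely for illustration and chosen so the output lands near those quoted figures:

```python
import math
import statistics

# Hypothetical repeated penetration measurements for one knife (illustrative only).
depths = [1.35, 1.55, 1.40, 1.52, 1.38, 1.50, 1.45, 1.33, 1.57, 1.45]

mean = statistics.mean(depths)
sem = statistics.stdev(depths) / math.sqrt(len(depths))  # standard error of the mean
ci95 = 1.96 * sem                                        # ~95% interval (large-sample approximation)

print(f"{mean:.2f} +/- {ci95:.2f}")                      # roughly 1.45 +/- 0.05
```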

Being scientific is not about using machines, or working within a certain precision. Being scientific and numerically sound means, first, that you know where to set your tolerances; second, that you know how to obtain them; and third, that you know how to interpret data - even data not collected in the best manner, since in reality much of the data you have to deal with will not have been collected by you. Lots of important scientific work has been done in the past, and will be done in the future, with only one important tool: the human brain.

As for the validity or scientific nature of my testing methods, and the test I described above specifically, I have not to date received any founded complaint - well, that is a lie - there was one. I had one report rejected because the spelling was so bad the reader refused to read it. This of course is no odds to me; it's not my fault the Englich language has no sensible rules.

-Cliff

[This message has been edited by Cliff Stamp (edited 13 April 1999).]
 