How can we have reproducibility without data?

Some very intelligent Bladeforums members have done, and continue to do, some very interesting experiments on edge retention and other qualities of blades. Unfortunately, some of them seem to think that revealing their data will confuse us, and choose only to release their conclusions without revealing the foundational basis for those conclusions.

My training is in mathematics and experimental physics and chemistry. Therefore, to me, seeing conclusions from someone who refuses to release experimental data leaves some unfortunate impressions.

I do not think there is any bad faith involved with these people, but I do have some definite concerns regarding reproducibility. If someone else replicates any of these tests, how can we tell whether the results actually agree when there is no way to perform comparative analyses of the data sets?
 
They may not have the data, as in they never took all the measurements: blade dimensions, hardness, heat treat protocol, etc.

In my case, when I said I still have a few weeks left... I still have a few weeks left. Broadcasting info piecemeal when not even I have all the data will certainly lead to conclusions being drawn prematurely and then most likely contradicted when I get to peel back another layer. I am not going to lay out the procedure without enough measurements, and I am not going to lay out the measurements until I have all that I am looking for.

I was hesitant about giving a couple of initial thoughts and interpretations, and I probably should have followed my gut. I very much want the procedure and results reviewed and critiqued, but I am first going to finish this contracted job. I am going to pay my bills first and satisfy my curiosity after. I don't want to discuss things from a point of ignorance either, hence the delay.
 
I know you are doing the right thing, Hardheart. I am eagerly awaiting your report, and I can wait as long as it takes for you to get it done to your satisfaction.

That being the case, I was motivated to start this thread by statements made by some other people, not you. Other people who, I should mention, I also respect, and whose work and dedication I admire. :D

Oh, in the interest of full disclosure, my selfish reason for wanting more data is that I enjoy mathematical modeling. I like figuring out mathematical relations and approximate formulas that apply to my hobbies and interests.
 
The way some of us look at it is we provide an overall view of what that data says and we lay it all out for FREE.

That's our time and money that WE spent, with little or almost no help.

So in the end that raw data is ours and is valuable.

We do however lay out our methods so they can be repeated if people want to make that kind of monetary and time commitment.

Most of us, however, never saw people lined up to offer donations, as in MONEY, to cover the expenses for the testing media and knives, and believe me, it's a substantial amount in my case. But they want us to just fork over our data for FREE?

It will never happen in this lifetime I can tell you that...... ;)

So if there are some who don't like that then that's just tough..... And they will just have to get over it or jump off a cliff or something....
 

Okay, so those people see themselves as working under the corporate research model rather than as scientists. Since they put a monetary value on the release of their data, do you think there is a reasonable standard for pricing and selling said data? For example, would cost of materials plus an hourly wage for the time investment be a good way to develop a standard value, or would some other standard be necessary? Would every buyer of data have to pay that much, or should it be priced more like software, where one puts a reasonable market price on the data and hopes that enough copies sell to cover development costs and turn a profit?
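Just to make the arithmetic behind those two options concrete, here is a minimal Python sketch of the cost-recovery model versus a software-style per-copy price. Every number in it (materials cost, hours, wage, expected buyers, margin) is invented purely for illustration; it is not anyone's actual budget.

Code:
# A rough sketch of the two pricing models mentioned above. Every number here
# is hypothetical and only illustrates the arithmetic, not anyone's real costs.

materials_cost = 400.0        # rope, knives, abrasives, etc. (assumed)
hours_spent = 60.0            # hours invested in the testing (assumed)
hourly_rate = 25.0            # wage assigned to that time (assumed)

total_cost = materials_cost + hours_spent * hourly_rate

# Model 1: cost recovery -- each buyer pays the full cost of producing the data.
cost_recovery_price = total_cost

# Model 2: software-style pricing -- a per-copy market price, hoping enough
# copies sell to cover the cost and return a modest margin.
expected_buyers = 50
margin = 0.20
per_copy_price = total_cost * (1 + margin) / expected_buyers

print(f"total cost of producing the data: ${total_cost:.2f}")
print(f"model 1, price per buyer:         ${cost_recovery_price:.2f}")
print(f"model 2, price per copy:          ${per_copy_price:.2f}")

The point of the comparison is just that the two standards produce very different per-buyer prices, which is why I'm asking which standard, if any, would seem fair.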

Alternatively, what if some of our testers did something like starting a Kickstarter drive for the full funding of a piece of research? If the costs of the testing get covered, then the data gets released publicly.

In the specific case of your edge retention thread, Ankerson, do you have a preferred brand of manila rope? If I send a knife to you to test, would it be appreciated if I pack it along with more than enough 5/8" rope to test it with?

Of course, having been trained in academia, what I'd like is for several of us to replicate your testing setup, and for us to be able to say, "okay, with 4 independent tests, this knife with 30 degree inclusive edge bevel made 68, 70, 75, and 57 cuts before reaching the load limit on the scale, so it belongs in category X, with Y% certainty." I do acknowledge the investment of time and money. To me it makes sense to look for ways to address that aspect of the endeavor.
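For what it's worth, here is a minimal Python sketch of the kind of summary I have in mind, using the hypothetical cut counts above (68, 70, 75, 57). It just pools the four replications into a mean and a rough 95% confidence interval; it is not meant to represent Ankerson's actual protocol or his categories.

Code:
# A minimal sketch of pooling four independent rope-cut replications into a
# mean and a rough 95% confidence interval. The cut counts are the
# hypothetical numbers from the paragraph above, not real test results.
import statistics
from math import sqrt

cuts = [68, 70, 75, 57]       # hypothetical cuts-to-limit from 4 independent tests

n = len(cuts)
mean = statistics.mean(cuts)
sd = statistics.stdev(cuts)   # sample standard deviation (n - 1 in the denominator)

t_crit = 3.182                # two-sided 95% t critical value for n - 1 = 3 degrees of freedom
half_width = t_crit * sd / sqrt(n)

print(f"mean = {mean:.1f} cuts, s = {sd:.1f}")
print(f"95% CI for the mean: {mean - half_width:.1f} to {mean + half_width:.1f} cuts")

With only four replications that interval comes out quite wide, which is exactly why pooling independent tests, or funding more of them, matters before assigning a knife to a category with any stated certainty.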

And of course, I appreciate the work done by you and others in this forum. I respect you and anyone who puts the time and effort into doing meaningful and consistent research in a field they care about.
 
For example, would cost of materials plus an hourly wage for the time investment be a good way to develop a standard value, or would some other standard be necessary?


If you factor in all the time and testing materials involved in my testing, that number would be bigger than you would think... ;)
 
My edge retention testing does not generate numbers. It is based on visual comparison of blades after cutting. It was the only way for me to reduce the number of variables which affect cutting performance. I use a hand lens for the observations. Lacking equipment for photomicrography, I fear you will just have to believe that I observed the differences correctly. I do report the Rockwell hardness of each blade I test. However, my techniques require little equipment. You are welcome to repeat them.

(I find it gratifying that reports of performance from folks who are just regular users do tend to corroborate my findings. My own reports don't pretend to generate groundbreaking results, they just clarify the turbid waters a bit.)

My profession has long been materials development as a chemist and materials engineer. Sometimes in the lab. Sometimes in the field. My experience in evaluating real world results has been that sometimes the best you can do with the budget available is say, "this one is better than that one." Sometimes, that knowledge is good enough, so that you don't have to spend 1000X the budget to get an exact, numerical answer.

With a large enough budget, one could have bought all of the Spyderco Mules. They were all the same geometry. With one exception, they all had good, perhaps optimal, heat treat. With a set of such identical blades, and enough budget, one could send the blades to CATRA and have them all tested.

Bohler did that. Identical blades. Different only in the alloy. Jim posted their results. Good stuff, that. But that takes a budget which none of us have. And anything short of that is going to generate bogus data if you push for too precise an answer.

Old aphorism:
It is better to have a rough approximation and know the answer ± 10%, than to demand an exact answer and not know the answer at all.

I think those of us who perform testing would be more enthusiastic about your posts if you did some testing yourself first, so that you have some idea of the effort involved.
 
I'm curious what intrinsic value the raw data has. If a person doesn't want to share, that's fair; the information belongs to the person who generated it. However, not sharing does reduce the value of the test results that are shared. Peer review is a good thing.
 
Okay, so those people see themselves as working under the corporate research model rather than as scientists.
Actually, corporations do employ scientists. Not sure what that false dichotomy has to do with anything. I am personally more concerned with how testing is done; every time someone tries to bolster an argument with their credentials, they invariably have a weak argument.

Nobody is looking to make money off of this. But it does take a LOT of time and effort. I used to do a lot of testing also, and got tired of it; I have a family to provide for. You have every Tom, Dick, and Harry asking you to try this or that. My reply is always: why don't YOU buy the knife and try it? You'd be very surprised at how incredibly ungrateful most of the forum population is, no joke. I still test at times, but only to satisfy my own curiosity.

Every time you publish results, somebody gets offended and attacks you. Every time. If we tried to have a database of testing, you could probably expect lawsuits from makers who didn't like where their knives placed. What if a small maker had a knife slip through with a bad heat treat that got a horrible review - and lost his business? Not good. That happened in my testing (bad heat treat), so I quietly told him about it and kept my mouth shut. The rest of his knives were great.

I think the smartest thing that Mr. Ankerson ever did was to refuse to publish the data and put the knives in general categories.


And they will just have to get over it or jump off a cliff or something....
I knew his name would come up! :D
 
Not sure what that false dichotomy has to do with anything.

You are correct that, the way I typed it, it is a false dichotomy. What I meant was "research done within corporations" vs. "peer reviewed science intended to build upon, and be built upon by, the work of other scientists."

Of course, if I had spelled it all out, the sentence would have been a run-on. My writing isn't the best, but I am a bit self-conscious about certain things my teachers and professors warned me about when I was younger.
 