Actually, that sounds "conceptually wrong" since you want to test "edge retention", not "edge retention after X passes".
Let me clarify. The passes were SHARPENING passes, used to raise a wire edge, which indicated that the sharpening was sufficient. The wire edge was then removed. No abrasives were used to dull the blades.
Not sure how this should be done, but you should start with blades of equal sharpness.
Our sharpness test was a PUSH sharpness test (the CATRA REST test), not a slicing sharpness test. All the tested blades had approximately equal REST sharpness values at the beginning of the test.
I've gone back and forth over whether blade performance (speaking in the generic sense, not the specific term "edge retention") should be evaluated by starting with equivalent sharpness. It's my observation that different steels have different ultimate sharpness limits. If steel A can be sharpened to a sharper edge than steel B, why shouldn't it get credit for that? On the other hand, getting blades really sharp takes as much skill as knowledge, as anybody who's tried to copy Wayne Goddard's sharpening techniques will appreciate. And having test results depend on the skill of the operator is not good practice. That's why we settled on a constant sharpening medium, a sharpening fixture, and a quite repeatable process, as evidenced by the consistent sharpness of the test blades after sharpening.
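To make "approximately equal starting sharpness" concrete, here's a minimal sketch of the kind of consistency check I mean. The blade names, REST-style readings, and the 5% threshold are all invented for illustration; they are not our actual data:

```python
# Sketch only: checking that test blades start at roughly equal sharpness.
# All names and numbers below are made up for illustration.
from statistics import mean, stdev

# Hypothetical push-cut sharpness readings (lower force = sharper).
starting_sharpness = {
    "blade_A": 1.21,
    "blade_B": 1.18,
    "blade_C": 1.25,
    "blade_D": 1.19,
}

values = list(starting_sharpness.values())
avg = mean(values)
cv = stdev(values) / avg  # coefficient of variation (spread / mean)

# A small spread (say, under 5%) suggests the sharpening process is
# repeatable enough that edge-retention differences aren't just
# differences in starting sharpness.
print(f"mean sharpness: {avg:.3f}, spread (CV): {cv:.1%}")
assert cv < 0.05, "blades do not start at approximately equal sharpness"
```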
On a more general note, rather than "equal sharpening", you should try something like "competitive sharpening": each blade sharpened in the way its steel works best. You would have something like "FF at its best is better than S90V at its best", not "FF is better than S90V... with an X-degree edge after Y passes...".
I agree with this statement in principle, but I think it's unwieldy in practice. (And I won't refer to FFD2 or S90V in this thread, because I want this thread to be about testing practices, not blade materials.) It seems ideal to test the best possible performance of steel A compared to the best possible performance of steel B.
The hitch is that nobody knows the best possible performance of either steel A or steel B. And if I'm a knife maker who believes steel B is better than steel A, any time I spend trying to find the best possible performance of steel A is wasted. And given the number of different steels, and different heat treating processes, etc., I'd spend all my time trying to determine the optimal performance of some steel I'm not interested in.
I can think of two ways around that. The first is that, when you're comparing steels, you do a range of experiments. In Diamond Blade's case, it was a bunch of steels at the same geometry. In other cases, it might be one steel at a bunch of different geometries. Then you evaluate the results, and explain them as clearly as possible. If someone disagrees with your results, they're free to run the same tests, and see if they get the same results. That's how science works.
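As a rough illustration of the "range of experiments" idea, here's a sketch of a full test grid over steels and edge geometries. The steel names, angles, and the measurement stub are all hypothetical placeholders, not anyone's test protocol:

```python
# Sketch only: a full grid of steels x edge geometries, each cell one
# edge-retention measurement. Everything here is a placeholder.
from itertools import product

steels = ["steel_A", "steel_B", "steel_C"]
edge_angles_deg = [15, 20, 25]

def measure_edge_retention(steel: str, angle: int) -> float:
    """Stand-in for an actual cutting test (e.g., amount of media cut
    before sharpness drops below a threshold). Returns made-up numbers."""
    return float(len(steel) * 10 + angle)  # placeholder values only

results = {
    (steel, angle): measure_edge_retention(steel, angle)
    for steel, angle in product(steels, edge_angles_deg)
}

# Publish the whole grid, not just the winner, so others can rerun
# the same tests and see whether they get the same results.
for (steel, angle), score in sorted(results.items()):
    print(f"{steel} @ {angle} deg: {score}")
```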
The second way is to have a standard reference blade. If it were up to me, I'd choose a razor blade produced by a reputable manufacturer, such as Olfa. It should have constant geometry and constant steel properties. There would need to be proof that such a blade was, in fact, consistent. Then, once there is a consistent reference blade, every manufacturer could compare their blade to the reference blade. And somebody who was interested could compare the blades from different manufacturers by comparing their performance relative to the reference blade.
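The arithmetic behind a reference blade is just a ratio, but it's worth spelling out, because the ratio is what makes results from different shops comparable. A minimal sketch, with every score invented:

```python
# Sketch only: normalizing performance against a common reference blade.
# All scores below are invented for illustration.

def relative_performance(blade_score: float, reference_score: float) -> float:
    """Performance index: >1.0 means the blade outperformed the
    reference blade on the same test, same day, same operator."""
    return blade_score / reference_score

# Maker 1's shop, in their own raw units
maker1_index = relative_performance(blade_score=420.0, reference_score=300.0)

# Maker 2's shop, different test and raw scale, same reference blade
maker2_index = relative_performance(blade_score=8.4, reference_score=7.0)

# The indices (1.40 vs 1.20 here) are comparable across shops because
# both are anchored to the same reference blade.
print(f"maker 1: {maker1_index:.2f}, maker 2: {maker2_index:.2f}")
```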
Right now, I don't think either one of these methods is practical. So I think the best approach is for testers just to be completely open about their test methods, data, and analysis.
Carl
-------------------------------
It is not necessary to believe things in order to reason about them.
It is not necessary to understand things in order to argue about them.
- P.A. Caron de Beaumarchais, French Author, 1732-1799