Tai Goo,
That was one hell of a link you posted there. Took me a while to digest it and to consider my own philosophical underpinnings before spitting out a response.
While I found myself agreeing with many of the author's criticisms, I wasn't sure that the alternative method presented by the same author at the end of the article actually differs in any real sense from what has already been described above as the scientific method. The alternative is presented as the following:
"1) Question: You begin with a puzzle, a mystery, a surprising event: You don't understand a phenomenon which has occurred, or which occurs regularly.
2) Hypothesis: Try to imagine a process or situation which meets this criterion: If what you've imagined were really the case, the puzzling phenomenon would make sense.
3) Testing: Find out if the hypothesis itself makes sense, by exploring its other consequences: If it were correct, what else should be observed? What would show that the hypothesis is wrong?
4) Evaluation: Decide whether the results of testing warrant accepting the hypothesis as a plausible explanation for the phenomenon. Consider the possibility of further testing, and whether other hypotheses might provide a better explanation."
Functionally there is little difference here, other than the missing step (5), peer review. The main difference that I could see is something rather subtle. I would suggest that step (1) in the scheme above tends to be much more constrained than the author idealizes. The investigator usually has some demonstrated expertise in the field of inquiry before attempting to proceed to (2). In other words, you are already familiar with existing theories (or, if you prefer, operating paradigms). This greatly constrains the types of hypotheses that can be generated to address (2).
The hypothesis must be consistent with existing evidence, and this requires familiarity with competing theories, past experiments, and current thinking on the subject matter. I think it is at this stage that the author of the link confuses the widespread use of both inductive and deductive approaches in science. If you really think about the constraints associated with (1), then deductive reasoning is by far the more common approach.
Of course, this is also a common critique of science - 'we have blinders on that have been placed there by our own training.' Indeed, one of the stated reasons for rapid advancement when a field adopts more multi-disciplinary approaches is that it produces more creativity at the hypothesis-generation stage. I think this is a valid criticism, but only to a degree. Unconstrained creativity (the equivalent of scientific anarchy, if you will) is also arguably not very fruitful - after all, the number of alternative hypotheses is always infinite!
Finally, and as was stated previously, peer review is an essential component of the method. Peer review cannot guarantee that good science has occurred, but it is an attempt to provide objective evaluation of a set of experimental results and the interpretation of those results. People sometimes criticize peer review for its failures, for example its inability to detect fraud and data fudging. Peer review cannot necessarily detect fraud at the time a publication is put forward, although over time most frauds are eventually recognized as significant outliers in the weight-of-evidence database. This of course takes time, and it is one reason why scientists tend to be a bit cautious about over-interpreting results. It is more difficult to detect the other type of fraud - the manipulator producing false data that fully supports existing theory. This can only be detected when experiments have been replicated by independent sources.
In practice, peer review can also be said to operate at two different steps in the scientific method. We are usually familiar with peer review occurring at step 5, but peer review also occurs between steps 1 and 2. This is peer review at the proposal stage: you have to demonstrate your expertise and conceptual model before the funder gives you the cash to run the experiment.
So there are two factors above that can hinder, or at least produce lags in, knowledge gain using the scientific method. Arguably, both factors are related to the need to generate weight of evidence from multiple experiments and observations. The first is the constraining of creativity to the expertise of a given sub-discipline; the second is the widespread belief that no single experiment is likely to be so perfect in its execution that it will cause the complete collapse of existing theory.
So finally, back to the OP - how does this all relate to knife making? I really don't know.
As I said in my previous post, facts really aren't part of scientific dialogue. Conceptual models are designed to provide 'the least worst' explanation of the truth. Progress is made by recognizing that continued improvements to the conceptual model take place. The administration of the method recognizes the need for experiments to be repeated by different groups while as many different aspects of the theory as possible are tested simultaneously. These features instill a conservative component in scientific progress that isn't always economically optimal.