(I will get to knives eventually.)
Be careful. ChatGPT is notoriously bad at arithmetic. For example, ask Google, "What is 63 times 12345679?" and you will get the correct answer (or you can just type 63 * 12345679). Ask ChatGPT and you will get the wrong answer. Ask ChatGPT that exact same question several times and you can get half a dozen different wrong answers, because ChatGPT's responses are randomized. Of course, OpenAI may figure out how to build a calculator into ChatGPT, or they may have done so in the week or two since I tried it.
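For the record, the correct answer is easy to check deterministically. Here is a one-line check in Python (any calculator gives the same result):

```python
# Ordinary arithmetic is deterministic: the same question always yields
# the same, correct answer -- no randomization involved.
print(63 * 12345679)  # 777777777, since 9 * 12345679 = 111111111 and 63 = 7 * 9
```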
If ChatGPT does not know the answer to a question, it will often just make up bullshit, like a college student who neglected to study for an exam. People often say that ChatGPT hallucinates, but what ChatGPT does has nothing to do with actual hallucinations. In philosophy, "bullshit" has a technical meaning (Harry Frankfurt's) that describes exactly what ChatGPT does: making up stuff without regard to whether it is right or wrong. And in fact, ChatGPT does not really know whether anything is right or wrong, except for certain special topics on which it has received intensive tutoring.
ChatGPT is incapable of logical reasoning. It will often contradict itself from one sentence to the next unless you give it softball questions where it can just paraphrase material from the Internet that it was trained on. ChatGPT is bad at questions where the answer requires combining two or more nontrivial operations. It is especially bad at questions where the answer involves what computer programmers call loops--that is, where you repeat some operation, with variations, until some stopping criterion is satisfied.
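To make the loop idea concrete, here is a minimal sketch in Python (the doubling question is my own toy example, not one I actually posed to ChatGPT):

```python
# The kind of "loop" question that trips ChatGPT up: repeat an operation
# (doubling) until a stopping criterion is met, then report how many
# repetitions it took.
n, doublings = 1, 0
while n <= 1_000_000:  # stopping criterion
    n *= 2             # the repeated operation
    doublings += 1
print(doublings)       # 20, since 2**20 = 1048576 is the first power of 2 above a million
```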
A couple of weeks ago, I asked ChatGPT how to sharpen a typical chef's knife. It regurgitated routine advice about using a whetstone, although it was very confused about the difference between a water stone and an oil stone. It said to start with a coarse grit and proceed to finer grits, although it had some very strange ideas about what was coarse and what was fine. Notably, it did not mention burrs. I asked what a burr was, and the answer was fairly good. But try as I might, I could not get ChatGPT to incorporate the idea of burrs into the sharpening process correctly. It mostly said that raising a burr was something you did at the very end of the whole process of sharpening. So ChatGPT used the right words and put those words into grammatically correct sentences, but many of the details were wrong, and ChatGPT could not put the words into the correct places in the overall process of sharpening.
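The structure ChatGPT could not get right is, again, a loop. Here is a sketch in Python of the sharpening sequence as I understand it; the grit numbers are illustrative, and burr_raised(), one_pass(), and deburr() are hypothetical placeholders for physical actions, not real functions:

```python
# The sharpening process as nested loops. Grit values are illustrative;
# burr_raised(), one_pass(), and deburr() stand in for physical steps.
def sharpen(knife, grits=(400, 1000, 3000)):  # coarse to fine
    for grit in grits:
        for side in ("left", "right"):
            # Raise a burr at each grit, on each side -- during the
            # process, not only at the very end, as ChatGPT kept saying.
            while not burr_raised(knife, side):
                one_pass(knife, side, grit)
    deburr(knife)  # only burr *removal* belongs at the end
```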