The problem is not the LLMs or the underlying computer science. The problem, IMO, is that people trust them blindly now that they are embedded in platforms like Google, even though they ingest unreviewed information and repeat it back without verification, and the problem of hallucinations has not been solved. Add to that:
- biased source material, for example from Wikipedia; references are often untraceable, or worse, fabricated by the models.
- scientific peer review is now done largely with LLMs, reinforcing the existing scientific status quo and helping to ridicule new ideas.
- there is an obvious IP problem. For example, in my article above: on the left, a real cook; on the right, an AI cook. Did the AI violate the picture's copyright?
Even when a user is diligent and tries to verify what, e.g., ChatGPT returns, the sources are sometimes nearly impossible to check, so I would rather not use them in the first place.
Etc. I'll step off my soapbox now.
Roland.