
Researchers tackle AI fact-checking failures with new LLM training technique – Computerworld



“They could give the model a genetics dataset and ask the model to generate a report on the gene variants and mutations it contains,” IBM explained. “With a small number of these seeds planted, the model begins generating new instructions and responses, calling on the latent expertise in its training data and using RAG to pull facts from external databases when necessary to ensure accuracy.”
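The seed-and-expand loop described above can be sketched roughly as follows. This is an illustrative toy, not IBM's actual implementation: the function names, the keyword-match "retriever" standing in for a real RAG pipeline, and the string-template "generation" standing in for a real LLM call are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Seed:
    """A human-written instruction/response pair used to bootstrap generation."""
    instruction: str
    response: str

def retrieve_facts(query, knowledge_base):
    """Toy retriever: return knowledge-base entries whose key appears in the query
    (a stand-in for real RAG retrieval against an external database)."""
    return [fact for term, fact in knowledge_base.items() if term in query.lower()]

def expand_seeds(seeds, knowledge_base, n_new=2):
    """From each seed, generate new instruction/response pairs, grounding each
    response in retrieved facts (a stand-in for the actual LLM generation step)."""
    generated = []
    for seed in seeds:
        for i in range(n_new):
            instruction = f"Variant {i + 1} of: {seed.instruction}"
            facts = retrieve_facts(seed.instruction, knowledge_base)
            grounding = "; ".join(facts) if facts else "no external facts found"
            generated.append(Seed(instruction, f"{seed.response} [grounded in: {grounding}]"))
    return generated

# Hypothetical genetics example mirroring the quote above.
kb = {"brca1": "BRCA1 variants are associated with elevated breast cancer risk."}
seeds = [Seed("Summarise the BRCA1 variants in this dataset.",
              "Report on the BRCA1 mutations found.")]
new_pairs = expand_seeds(seeds, kb)
print(len(new_pairs))  # 2: each seed yields n_new grounded pairs
```

The key idea the sketch captures is that a small number of seeds fan out into many synthetic training pairs, with retrieval attached at generation time so that factual claims are anchored to an external source rather than to the model's parametric memory alone.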

This might sound rather like a way of implementing RAG. The difference, the researchers said, is that these specialist models are called upon, via an API, only when they are needed.
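That routing behaviour can be illustrated with a minimal sketch: a general model answers by default, and a specialist endpoint is invoked only when the query falls in its domain. The keyword router, function names, and stub "API" are all hypothetical; a production system would use a learned router and real network calls.

```python
def specialist_genetics_api(query):
    """Stand-in for a remote specialist-model endpoint (assumed, not real)."""
    return f"[genetics specialist] answer to: {query}"

def general_model(query):
    """Stand-in for the general-purpose LLM that handles everything else."""
    return f"[general model] answer to: {query}"

# Toy routing table: which query terms warrant the specialist call.
SPECIALISTS = {"gene": specialist_genetics_api, "mutation": specialist_genetics_api}

def route(query):
    """Invoke a specialist via its API only when the query needs it."""
    for keyword, api in SPECIALISTS.items():
        if keyword in query.lower():
            return api(query)
    return general_model(query)

print(route("List the gene variants in this sample"))  # routed to the specialist
print(route("Draft a short meeting summary"))          # handled by the general model
```

The design point is cost and accuracy: the expensive, narrowly trained specialist is consulted on demand rather than merged into, or queried alongside, the general model on every request.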

Still bad at facts

According to Mark Stockley, who co-presents The AI Fix podcast with Graham Cluley, the underlying problem is that LLMs are widely misunderstood. They are good at specific tasks but are not, nor were they ever intended to be, simple fact- or truth-checking engines.
