
AWS wants to drastically cut down AI hallucinations – here’s how it plans to do it



AWS’ new Automated Reasoning checks promise to stop models from producing factual errors and hallucinations, though experts have told ITPro that the feature won’t be an all-encompassing fix for the problem.

At AWS re:Invent 2024, the hyperscaler unveiled the tool as a safeguard within Amazon Bedrock Guardrails that will mathematically validate the accuracy of responses generated by large language models (LLMs).
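For readers curious what this looks like in practice, below is a minimal sketch of applying a Bedrock guardrail to a model's output using boto3's ApplyGuardrail API. It assumes a guardrail (with an Automated Reasoning policy attached) has already been created in the AWS console; the guardrail ID, version, and region shown here are placeholders, and the exact response fields may vary by configuration.

```python
import boto3

# Placeholder values: substitute the ID/version of a guardrail you have
# already created and configured with an Automated Reasoning policy.
GUARDRAIL_ID = "abc123example"
GUARDRAIL_VERSION = "1"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Validate a model-generated answer against the guardrail before
# returning it to the user.
model_answer = "Employees accrue 30 days of paid leave per year."

response = client.apply_guardrail(
    guardrailIdentifier=GUARDRAIL_ID,
    guardrailVersion=GUARDRAIL_VERSION,
    source="OUTPUT",  # check the model's response rather than the user's input
    content=[{"text": {"text": model_answer}}],
)

# "GUARDRAIL_INTERVENED" indicates the guardrail blocked or rewrote the text;
# "NONE" means it passed the configured checks.
if response["action"] == "GUARDRAIL_INTERVENED":
    print("Response flagged by guardrail:", response.get("outputs"))
else:
    print("Response passed guardrail checks.")
```

This is only illustrative of how Guardrails checks are invoked at runtime; the Automated Reasoning validation itself is defined as a policy on the guardrail, not in the API call.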


