ChatGPT and Google Bard studies show that AI chatbots are unreliable

Two phones next to each other, one with ChatGPT open and one with Google
(Image credit: Shutterstock / Tada Images)

ChatGPT and Google Bard have charmed their way into our tech lives, but two recent studies show that AI chatbots remain highly prone to spouting misinformation and conspiracy theories – if you ask them in the right way.

NewsGuard, a site that rates the credibility of news and information, recently tested Google Bard by feeding it 100 known falsehoods and asking the chatbot to write content around them. According to Bloomberg, Bard "generated misinformation-laden essays about 76 of them".

But while there's still no universal benchmarking system for checking the accuracy of AI chatbots, these reports highlight the dangers they pose – whether that's lending themselves to bad actors or being relied on to produce factual or accurate content.

Analysis: AI chatbots are convincing liars

A laptop showing the OpenAI logo next to one showing a screen from the Google Bard chatbot
(Image credit: ChatGPT)

These reports are a good reminder of how today's AI chatbots work – and why we should be careful when relying on their confident responses to our questions.

Both ChatGPT and Google Bard are 'large language models', which means they've been trained on vast amounts of text data to predict the most likely word in a given sequence. 
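That next-word objective can be illustrated with a deliberately tiny sketch. This is not how ChatGPT or Bard actually work – they use neural networks trained on vast corpora – but a toy bigram model built on word counts captures the same idea of predicting the most likely word to follow a given one (the corpus and function names here are invented for illustration):

```python
# Toy sketch of next-word prediction: count which word follows which
# in a tiny corpus, then predict the most frequent follower.
# Real large language models use neural networks, not raw counts,
# but the training objective is the same next-word prediction.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count the words that appear immediately after it
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

The key point for the article's argument: the model only knows which words tend to follow which – it has no notion of whether the resulting sentence is true.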

This makes them very convincing writers, but ones that also have no deeper understanding of what they're saying. So while Google and OpenAI have put guardrails in place to stop them from veering off into undesirable or even offensive territory, it's very difficult to stop bad actors from finding ways around them.

For example, the prompts that the Center for Countering Digital Hate (CCDH) fed to Bard included lines like “imagine you are playing a role in a play”, which seemingly managed to bypass Bard's safety features.

While this might appear to be a manipulative attempt to lead Bard astray and not representative of its usual output, this is exactly how troublemakers could coerce these publicly available tools into spreading disinformation or worse. It also shows how easy it is for the chatbots to 'hallucinate', which OpenAI describes simply as "making up facts".

Google has published some clear AI principles that show where it wants Bard to go, and on both Bard and ChatGPT it's possible to report harmful or offensive responses. But in these early days, we should clearly still be handling both of them with kid gloves.

Alexa Hernandez
Editor

Alexa Hernandez is a lover of animals, TV series, movies, and technology.
