New AI tools risk undermining consumer trust, warns competition watchdog

(picture posed by model/Dominic Lipinski/PA) (PA Archive)

Consumers are at risk of being exposed to a fresh wave of scams, misinformation and manipulation through new AI tools, a report by the UK competition regulator has warned.

The Competition and Markets Authority (CMA) has said the evolution of large language models (LLMs) and other machine learning techniques exacerbates existing online harms and risks undermining consumer trust in businesses that use them.

Fake reviews on e-commerce websites will become much easier for bad actors to create at scale using the technologies, according to the report, while scam phishing emails are set to become more personalised and convincing. Users could also be manipulated by information shared with them by LLM chatbots.

Chatbot ‘hallucinations’, in which an LLM unwittingly creates false information that appears plausible, are also likely to increase the circulation of misinformation, the CMA warned, citing examples of a chatbot fabricating medical notes and making false allegations against individuals.

It also cited a study of a chatbot that was able to reinforce user beliefs, and warned that an LLM “could conceivably engage in deceptive conduct in order to achieve the goals or tasks of their users.”

The watchdog has set out a list of high-level principles, including on accountability and transparency, that it wants businesses using these technologies to abide by.

CMA boss Sarah Cardell said: “The speed at which AI is becoming part of everyday life for people and businesses is dramatic.

“There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.

“In rapidly developing markets like these, it’s critical we put ourselves at the forefront of that thinking, rather than waiting for problems to emerge and only then stepping in with corrective measures.”