A cyber security expert has warned that artificial intelligence (AI) is increasingly being used in election campaigns worldwide, often targeting voters through personalised messaging, chatbots, and deepfake videos.
Sean Hoyte, managing director of Cyber Security and Anti-Crime Services Ltd, told Newsday these tools, while powerful in reaching voters, can also serve as vehicles for disinformation.
He said the electorate must be wary of manipulation.
"Additionally, political parties can automate engagement through chatbots and deepfake videos, significantly enhancing outreach," Hoyte said.
"However, the electorate must be wary of manipulation, which can take the form of fake news articles, altered images and deepfake videos. This serves as an excellent propaganda tool designed to discredit political opponents and mislead voters."
Hoyte is a cyber security and anti-crime consultant.
He holds a master's degree in Forensic Information Technology from the University of Portsmouth, UK, and a bachelor's degree in Computing Information Systems from London Metropolitan University.
To combat AI-driven misinformation, Hoyte advised the electorate to adopt the "ABC of investigation"—Assume nothing, Believe no one, and Check everything.
"The electorate must take time to check the credibility of sources and look for inconsistencies in images and videos," he said.
"For the more tech-savvy voters, using image searches, timestamp analysis, and AI detection tools can help identify manipulated content."
He said voters must remain sceptical of sensational claims and verify information before sharing or taking action.
As the country gears up for the April 28 general election, AI and social media campaigns have emerged as a central factor in the political landscape.
With a shortened campaign period limiting traditional public engagements, political parties have turned to social media as their primary means of communication.
This shift reflects a broader global trend in political campaigning, where AI-driven tools are used to enhance outreach and engagement.
However, this development has not come without controversy. The use of AI in politics has raised concerns about misinformation, propaganda, and the potential manipulation of voters.
Allegations of bullying and racist slurs against newly appointed Prime Minister Stuart Young, said to stem from his teenage years, have sparked debate over the authenticity of online posts.
One such post has emerged, claiming to be testimony from the alleged "victim" in the incident involving the PM.
However, Newsday's checks could not confirm the identity of the post's author, who claimed to share a firsthand account of the alleged incident.
The accuracy of this and other claims remains in question, fuelling discussions about the reliability of AI-generated content in political discourse.
A March 23 media release from the PNM's public relations officer, Faris Al-Rawi SC, cautioned the public about the deceptive use of AI-generated content.
He charged that,