From Hype to Reality: How Caribbean Businesses Can Stay Secure in the Age of AI
By Angelo Liriano, Cybersecurity Executive, Cisco Caribbean
LINKAGE Q4 (2025) - HSSE 360: INNOVATION FOR RESILIENCE

Across Trinidad & Tobago and the wider Caribbean, AI has quietly moved from buzzword to daily habit. Employees are asking chatbots to draft emails, summarise reports and even write code. Many leaders didn’t formally “launch” an AI programme; people simply started using public tools to get their work done faster.

At the same time, some of the most critical sectors for our society, such as public administration, education, banking and insurance, are going further. Governments are exploring AI assistants to help citizens navigate services. Universities are experimenting with AI to support students and automate back-office tasks. Banks and insurers are using AI to score risk, detect fraud and personalise customer interactions. In short, AI is no longer something “out there”; it now sits inside the workflows that run our economies.

AI is getting inside every workflow – but where are the guardrails? We are looking at huge benefits, but also at a quieter reality: every new AI use case adds a layer of security risk, safety risk and privacy risk that boards and regulators will eventually ask about.

On the employee side, one of the biggest risks is surprisingly simple. It only takes one well-intentioned person pasting a client contract, a confidential financial forecast or a list of high-value customers into a public GenAI tool for that information to leave the organisation’s control. Even if the provider promises to protect data, the company is still accountable to its customers, shareholders and regulators.

In parallel, AI systems are being used for highly personal and sensitive topics. If an AI tool embedded in an HR portal, wellness app or citizen-facing service is asked how to self-harm, abuse a partner or evade law enforcement, the organisation providing that channel has a real duty of care. Guardrails are not a “nice to have”; they are the foundation of basic governance.

For companies building or fine-tuning their own models, the risk surface expands again. When you train an AI system on your own data and expose it to customers or citizens, you are no longer just a user of technology; you are, in effect, an AI producer. That comes with responsibilities, and with new threats such as data and model poisoning. In a data poisoning attack, adversaries subtly manipulate the training data so that the model learns the wrong behaviour in specific situations, for example consistently misclassifying certain types of transactions. In model poisoning, attackers can modify an open-source model and re-upload it so that it behaves normally most of the time but produces misleading or harmful answers when triggered by certain prompts.

The “PoisonGPT” experiment made this very real. Researchers took a well-known open-source model, introduced targeted changes and then published it to a public repository under a legitimate-sounding name. The model looked fine, but it was able to spread tailored misinformation such as “The Eiffel Tower is located in Rome”. That demonstration by the French researchers at Mithril Security was not a theoretical paper; it was a warning for every organisation that pulls models or datasets from the internet and plugs them into business-critical workflows.
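For readers who want to picture how subtle data poisoning can be, here is a deliberately simplified, hypothetical sketch in Python. It uses scikit-learn’s synthetic data generator as a stand-in for transaction records, not any real banking dataset, and the flip_fraud_labels helper and the 30% poisoning fraction are illustrative assumptions rather than a description of a real attack.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "transactions": class 1 stands in for fraud, class 0 for legitimate activity.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_fraud_labels(labels, fraction, rng):
    # Relabel a fraction of the fraud examples as legitimate, mimicking poisoned training data.
    poisoned = labels.copy()
    fraud_idx = np.where(poisoned == 1)[0]
    flipped = rng.choice(fraud_idx, size=int(len(fraud_idx) * fraction), replace=False)
    poisoned[flipped] = 0
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, flip_fraud_labels(y_train, fraction, rng))
    fraud_detection = model.score(X_test[y_test == 1], y_test[y_test == 1])
    print(f"poisoned share of fraud labels: {fraction:.0%} -> fraud detection rate: {fraud_detection:.2f}")

The exact numbers do not matter; the pattern does. The poisoned model still trains, deploys and reports healthy-looking overall accuracy, which is precisely why this class of attack is hard to spot without deliberate validation of both data and model behaviour.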
At the same time, AI is moving onto the attacker’s side of the chessboard. The most striking example disclosed so far came from Anthropic, the company behind the Claude model. In a recent report, they described how a Chinese state-aligned group used Claude to run a full cyber espionage campaign. Around 80–90% of the operational work in that campaign was done by AI: mapping targets, researching technologies, generating scripts and helping sequence the attack steps. Humans were still in charge, but AI dramatically accelerated the process.

This shift is happening against a background of rising pressure on critical sectors. The October 2025 Cisco Talos Incident Response report highlighted that public administration has now become the most targeted vertical in their caseload, based on real incidents they investigated globally. Immediately behind it came telecommunications, healthcare and financial services. That mirrors what many of us in the region are feeling: governments, regulators, telecom operators, hospitals and banks sit at the centre of our economies, and any disruption during an AI-accelerated attack would have an outsized impact on citizens and businesses.

So, what can Caribbean companies realistically do?

The first step is cultural, not technical: building a risk-centric mindset around AI. That means treating AI like any other powerful capability, useful but governed. Employees need awareness training that goes beyond traditional phishing examples to cover AI-specific situations: what can and cannot be pasted into AI tools, how to spot AI-generated scams, and what to do if an AI system produces a suspicious or harmful answer. Leaders need to send a clear message that AI is strategic, but that security, safety and privacy are non-negotiable.

The second step is to adopt a reference framework, so the conversation is not driven purely by instinct or headlines. The NIST AI Risk Management Framework is one of the most widely recognised starting points. It encourages organisations to identify where AI is used, which data it touches, who could be harmed if something goes wrong and what controls are needed to manage that risk. For a Caribbean context, this does not have to be a massive compliance project. Even a lean adaptation, mapping key AI use cases by industry against basic risk criteria, can help align boards, CISOs, CIOs, legal and business leaders around a common language.

The third step is turning principles into concrete controls. Organisations should start by gaining visibility into AI usage: which tools are being used on corporate devices and networks, what kinds of data are flowing into them, and where high-risk use cases sit. They should maintain a clear inventory of their AI assets, such as models, datasets and applications, and continuously validate them, including through “algorithmic red teaming”, much like we already do with our networks. From there, technical guardrails can be introduced: preventing sensitive data from being sent to unauthorised tools, enforcing access controls around internal AI platforms, adding safety and content filters to citizen- or customer-facing systems, and integrating AI signals into broader security monitoring and incident response.
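To give a flavour of the first of those guardrails, preventing sensitive data from being sent to unauthorised tools, here is a minimal, hypothetical sketch in Python. The SENSITIVE_PATTERNS table and the check_prompt helper are illustrative assumptions only; a real deployment would rely on enterprise DLP and AI security tooling rather than a handful of regular expressions.

import re

# Illustrative patterns only; production controls use dedicated DLP engines, not a few regexes.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marking": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def check_prompt(prompt: str):
    # Return (allowed, findings) for a prompt bound for an external GenAI tool.
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return len(findings) == 0, findings

allowed, findings = check_prompt("Summarise this CONFIDENTIAL forecast for card 4111 1111 1111 1111")
if not allowed:
    print("Blocked before leaving the organisation:", ", ".join(findings))

Even a check this crude makes the governance question concrete: somewhere in the workflow, a control has to decide what is allowed to leave the organisation, and that decision needs an owner, a policy and an audit trail.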
It is also crucial to clearly define data requirements before launching any AI development initiative: what data is needed, where it is stored and who can access it. This is where Sovereign AI becomes especially powerful for the Caribbean. By keeping sensitive data, models and critical workloads under local or regional control, we can build AI that truly reflects our laws, values and realities. Trinidad & Tobago is uniquely positioned to lead in this space: a strong base of IT and engineering talent, a strategic geographic position linking the Americas, and deep commercial relationships across the region make it an ideal hub for trusted, locally governed AI that delivers better services for both businesses and citizens.

Finally, human oversight remains essential. No matter how advanced models become, we should not delegate high-impact decisions entirely to machines. Credit approvals, major payments, disciplinary actions, clinical recommendations and changes to critical infrastructure should all retain a human in the loop, with clear escalation paths for when AI outputs conflict with company values, laws or simple common sense. Ethics, in this sense, is not a separate academic topic; it is the disciplined practice of asking, “Should we act on this AI recommendation?” before we do.

For members of the wider Caribbean business community, AI is an extraordinary opportunity to innovate, differentiate and serve citizens and customers in new ways. But the move from hype to reality will not be measured by how many tools we deploy. It will be measured by whether our customers, partners and regulators can trust the way we use them, and whether we are ready for a world where malicious actors are using AI too.

ABOUT THE AUTHOR
Angelo Liriano is a Cybersecurity Executive at Cisco Caribbean. He can be contacted via email at aliriano@cisco.com and on LinkedIn at Linkedin.com/in/angeloliriano.