Across the Middle East, Türkiye, and Africa (META) region, 81.7% of professionals are leveraging AI tools to streamline tasks, yet only 38% have received training on the cybersecurity pitfalls that could expose sensitive data to leaks, breaches, or "prompt injection" attacks that trick AI tools into ignoring their instructions.
The findings, drawn from Kaspersky's 2025 research titled "Cybersecurity in the Workplace: Employee Knowledge and Behaviour," highlight a region-wide surge in AI integration. Conducted by market research firm Toluna, the survey polled 2,800 employees and business owners who rely on computers for their jobs across seven countries: Türkiye, South Africa, Kenya, Pakistan, Egypt, Saudi Arabia, and the UAE. Trends held steady in key African markets like South Africa, Kenya, and Egypt, underscoring a continental pattern of rapid adoption without commensurate safeguards.
For most respondents, AI isn’t abstract theory; it’s a daily reality. A whopping 94.5% grasp the concept of generative AI, the tech behind tools like ChatGPT that create text, images, and more from user prompts. Usage breaks down as follows: 63.2% tap AI for writing or editing documents, 51.5% for crafting work emails, 50.1% for data analytics, and 45.2% for generating visuals like images or videos. “These tools are automating the mundane, boosting productivity in ways we couldn’t have imagined a few years ago,” the report notes, but warns that unchecked enthusiasm risks turning innovation into a liability.
At the heart of the study lies a glaring preparedness chasm. One in three professionals (33%) reported zero AI-related training from their employers.
Among those who did receive instruction, the emphasis skewed heavily toward practical perks: 48% learned how to wield AI effectively, including crafting optimal prompts. Only 38% said their training covered cybersecurity, a critical oversight when AI's appetite for data can inadvertently feed proprietary information to external servers, or when models fall prey to sophisticated attacks like data poisoning.
Compounding the issue is "shadow IT," where employees deploy unvetted tools without corporate oversight. While 72.4% of respondents said generative AI is greenlit at their workplaces, 21.3% operate under outright bans, and 6.3% are unsure of their organization's policy. This patchwork leaves organizations exposed, as personal devices and unapproved apps blur the line between work and risk.
Experts call for a measured middle path. “For successful AI implementation, companies should avoid the extremes of a total ban as well as a free-for-all,” advises Chris Norton, General Manager for Sub-Saharan Africa at Kaspersky. “Instead, the most effective strategy is a tiered access model, where the level of AI use is calibrated to the data sensitivity of each department. Backed by comprehensive training on cybersecurity aspects of AI, this balanced approach fosters innovation and efficiency while rigorously upholding security standards.”
Kaspersky’s playbook for securing AI in the enterprise offers actionable steps to bridge these gaps:
- Prioritize employee education: Roll out targeted training on responsible AI habits. Kaspersky’s Automated Security Awareness Platform provides ready-made modules on AI security to slot into existing programs.
- Empower IT teams: Equip specialists with defenses against AI-specific exploits via specialized courses, such as the “Large Language Models Security” training in Kaspersky’s Cybersecurity Training portfolio.
- Fortify devices: Mandate endpoint protection on all work and BYOD (bring-your-own-device) gadgets. Kaspersky Next solutions shield against phishing lures and trojanized AI apps, where cybercriminals increasingly hide infostealers in fake tools.
- Track and adapt: Run periodic surveys to gauge AI’s footprint — from frequency to functions — then tweak policies based on the risk-benefit calculus.
- Deploy smart filters: Implement AI proxies that scrub sensitive details (like client IDs) from queries in real time and enforce role-based controls to curb misuse.
- Draft a holistic policy: Formalize guidelines covering bans on high-risk uses, approved tool lists, and ongoing monitoring. Kaspersky’s free resource on securely implementing AI systems serves as a blueprint.
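To make the "smart filters" recommendation concrete, the sketch below shows one minimal way such a pre-prompt filter could work: redact recognizable sensitive tokens before a query leaves the company, and enforce a role-based allow list. Everything here is an illustrative assumption for this article — the patterns, role names, and `redact()` helper are invented, and this is not a description of any Kaspersky product.

```python
import re

# Hypothetical patterns for sensitive tokens; a real deployment would use
# the organization's own data classifiers, not these two regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "client_id": re.compile(r"\bCL-\d{6}\b"),  # assumed client-ID format
}

# Example of a role-based control: only cleared departments may query AI.
ALLOWED_ROLES = {"marketing", "engineering"}

def redact(prompt: str, role: str) -> str:
    """Block disallowed roles, then scrub sensitive tokens from a prompt."""
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{role}' is not cleared for AI tools")
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Summarise ticket from jane@example.com about CL-123456", "marketing"))
# -> Summarise ticket from [EMAIL REDACTED] about [CLIENT_ID REDACTED]
```

The point of the tiered design Norton describes is visible even in this toy: the filter combines *what* may leave (redaction) with *who* may ask (role checks), so policy can be tightened per department rather than banning AI outright.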
As AI permeates boardrooms and back offices alike, the Kaspersky study serves as a wake-up call for META’s business leaders.
With adoption outpacing awareness, the onus is on organizations to channel this technological tide safely, lest the very tools meant to empower employees become unwitting gateways for tomorrow's breaches.