Artificial Intelligence
As artificial intelligence (AI) continues to reshape global industries, GOMYCODE Kenya has launched its transformative ‘GOMYCODE for Business’ program. This innovative initiative is poised to empower Kenyan enterprises, providing the essential digital skills needed to thrive in an increasingly AI-driven future.
Announced during the company’s second-anniversary celebrations in Nairobi, this marks a significant evolution for GOMYCODE, expanding its commitment from nurturing individual talent to fortifying entire organizations against the challenges of a rapidly changing digital economy.
Kenya’s journey towards digital transformation is accelerating, yet a critical obstacle persists: a pervasive digital skills gap. Despite recognizing the imperative for innovation, many companies struggle to access or develop the skilled workforce necessary to implement new technologies.
The World Bank’s Kenya Digital Economy Report 2024 reveals that a substantial 70 percent of Kenyan businesses identify digital skills shortages as a primary barrier to adopting modern technological solutions.
For the past two years, GOMYCODE Kenya has been a key player in cultivating digital talent, helping young professionals launch successful careers. Now, celebrating its second year, GOMYCODE has unveiled ‘GOMYCODE for Business’ – a tailored upskilling program designed to guide companies through digital transformation with confidence and agility.
“The digital skills gap is no longer just a concern for job seekers; it’s a critical issue impacting business survival,” emphasized Mellany Msengezi, Country Director at GOMYCODE Kenya.
“Our two years have been dedicated to building a robust tech talent pipeline. Now, we are extending our expertise directly to companies, helping them prepare for the imminent wave of digital disruption.”
Distinguishing itself from traditional corporate training models, ‘GOMYCODE for Business’ embraces a modular, outcomes-focused methodology.
The program offers hands-on, practical training in high-demand areas such as AI literacy, data analytics, cloud technologies, and modern software tools.
It provides flexible learning formats, accommodating in-office, remote, and hybrid teams, ensuring that newly acquired skills are immediately applicable to real-world business challenges. Early pilot programs are already underway across Africa in diverse sectors including finance, logistics, manufacturing, and healthcare.
“The demand for skilled talent is undeniable. What’s often missing is structured, tech-centric training that truly adapts to the unique needs of businesses,” noted Yahya Bouhlel, CEO and Founder of GOMYCODE.
Kenya, with its vibrant youth demographic and flourishing tech ecosystem, is a recognized hub for innovation in Africa. However, experts caution that without proactive investment in workforce digital readiness, companies risk losing their competitive edge in the global marketplace.
GOMYCODE is determined to address this challenge, positioning itself not just as a coding school, but as a crucial strategic partner in shaping Kenya’s digital future by seamlessly connecting learning outcomes with business performance. The message is clear: the AI era is here, and Kenyan businesses must embrace it to thrive.
Amazon Accelerates AI-Powered Automation, Sparking Workforce Concerns
Amazon is aggressively pushing forward with the automation of its vast warehouse network, leveraging artificial intelligence and advanced robotics, a move that is intensifying scrutiny on the future of its human workforce.
The e-commerce giant, renowned for its rapid delivery infrastructure, showcased a new generation of high-tech tools in Silicon Valley, asserting that AI is not only driving innovation but also dramatically speeding up its development.
At a conference held within a massive distribution center, Amazon unveiled its “Blue Jay” robotic arms, designed for efficient picking, sorting, and consolidating tasks at individual workstations.
The Blue Jay, currently undergoing testing in South Carolina, follows the earlier introduction of the “Vulcan” robot, which Amazon described as possessing a “sense of touch” for order fulfillment.
Tye Brady, Amazon Robotics chief technologist, credited AI with slashing the design, build, and deployment time for Blue Jay by approximately two-thirds, reducing the cycle to just over a year.
“That’s the power of AI,” Brady stated, adding, “Expect more rapid development cycles like this…we’re on a trajectory to supercharge the scale and impact of innovation with our operations.”
Despite these advancements, the acceleration of robotics and AI in its operations has reignited concerns about job displacement.
Brady sought to allay these fears, emphasizing Amazon’s track record of creating more U.S. jobs than any other company in the past decade.
“To our frontline employees, here’s my message,” Brady remarked. “These systems are not experiments. They’re real tools built for you to make your job safer, smarter and more rewarding.”
However, a recent report by The New York Times painted a different picture, suggesting that robotics could enable Amazon to bypass hiring 160,000 workers within two years, even as its online retail business continues to expand.
This automation could significantly reduce the need for new hires, particularly temporary staff essential for peak holiday shopping seasons.
Beyond the robotic hardware, Amazon also demonstrated an AI agent designed to optimize the management of both robots and human warehouse teams.
The company’s technological reach extends beyond distribution centers, with demonstrations of camera-equipped smart glasses providing navigation and delivery instructions to drivers.
As Amazon continues to integrate AI and robotics deeper into its operational fabric, the balancing act between technological advancement and its impact on human employment remains a critical point of discussion.
META Workplaces Embrace AI Frenzy as 82% Dive into Daily Tools While Just 38% Grasp Cybersecurity Defenses
Across the Middle East, Türkiye, and Africa (META) region, 81.7% of professionals are leveraging AI tools to streamline tasks, yet a mere 38% have received training on the cybersecurity pitfalls that could expose sensitive data to leaks, hacks, or manipulative “prompt injections.”
The findings, drawn from Kaspersky’s 2025 research titled “Cybersecurity in the Workplace: Employee Knowledge and Behaviour,” highlight a region-wide surge in AI integration. Conducted by market research firm Toluna, the survey polled 2,800 employees and business owners who rely on computers for their jobs across seven countries: Türkiye, South Africa, Kenya, Pakistan, Egypt, Saudi Arabia, and the UAE. Trends held steady in key African markets such as South Africa, Kenya, and Egypt, underscoring a continental pattern of rapid adoption without commensurate safeguards.
For most respondents, AI isn’t abstract theory; it’s a daily reality. A whopping 94.5% grasp the concept of generative AI, the tech behind tools like ChatGPT that create text, images, and more from user prompts. Usage breaks down as follows: 63.2% tap AI for writing or editing documents, 51.5% for crafting work emails, 50.1% for data analytics, and 45.2% for generating visuals like images or videos. “These tools are automating the mundane, boosting productivity in ways we couldn’t have imagined a few years ago,” the report notes, but warns that unchecked enthusiasm risks turning innovation into a liability.
At the heart of the study lies a glaring preparedness chasm. One in three professionals (33%) reported zero AI-related training from their employers.
Among those who did get instruction, the emphasis skewed heavily toward practical perks: 48% learned how to wield AI effectively, including crafting optimal prompts. Cybersecurity? Only 38% touched on it — a critical oversight when AI’s hunger for data can inadvertently feed it proprietary information to external servers, or fall prey to sophisticated attacks like data poisoning.
Compounding the issue is the shadowy underbelly of “shadow IT,” where employees deploy unvetted tools sans corporate oversight. While 72.4% of respondents said generative AI is greenlit at their workplaces, 21.3% operate under outright bans, and 6.3% navigate in a fog of uncertainty. This patchwork of policies leaves organizations exposed, as personal devices and rogue apps blur the lines between work and risk.
Experts call for a measured middle path. “For successful AI implementation, companies should avoid the extremes of a total ban as well as a free-for-all,” advises Chris Norton, General Manager for Sub-Saharan Africa at Kaspersky. “Instead, the most effective strategy is a tiered access model, where the level of AI use is calibrated to the data sensitivity of each department. Backed by comprehensive training on cybersecurity aspects of AI, this balanced approach fosters innovation and efficiency while rigorously upholding security standards.”
Kaspersky’s playbook for securing AI in the enterprise offers actionable steps to bridge these gaps:
- Prioritize employee education: Roll out targeted training on responsible AI habits. Kaspersky’s Automated Security Awareness Platform provides ready-made modules on AI security to slot into existing programs.
- Empower IT teams: Equip specialists with defenses against AI-specific exploits via specialized courses, such as the “Large Language Models Security” training in Kaspersky’s Cybersecurity Training portfolio.
- Fortify devices: Mandate endpoint protection on all work and BYOD (bring-your-own-device) gadgets. Kaspersky Next solutions shield against phishing lures and trojanized AI apps, where cybercriminals increasingly hide infostealers in fake tools.
- Track and adapt: Run periodic surveys to gauge AI’s footprint — from frequency to functions — then tweak policies based on the risk-benefit calculus.
- Deploy smart filters: Implement AI proxies that scrub sensitive details (like client IDs) from queries in real-time and enforce role-based controls to nix misuse.
- Draft a holistic policy: Formalize guidelines covering bans on high-risk uses, approved tool lists, and ongoing monitoring. Kaspersky’s free resource on securely implementing AI systems serves as a blueprint.
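The “smart filters” recommendation above can be illustrated with a minimal sketch of a redaction layer that scrubs sensitive substrings from prompts before they leave the corporate boundary. The patterns here (an email regex and an assumed `CL-` client-ID format) are purely hypothetical examples; a real deployment would use vetted PII detectors and role-based policy, not two illustrative regexes.

```python
import re

# Hypothetical redaction patterns for illustration only; production
# systems would rely on vetted PII/secret detectors, not these regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CLIENT_ID": re.compile(r"\bCL-\d{6}\b"),  # assumed client-ID format
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt is forwarded to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Summarise the complaint from jane@example.com about CL-123456"))
# → Summarise the complaint from [EMAIL] about [CLIENT_ID]
```

In practice such a filter would sit in an AI proxy alongside the role-based controls the report recommends, so that what a prompt may contain depends on the data sensitivity of the requesting department.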
As AI permeates boardrooms and back offices alike, the Kaspersky study serves as a wake-up call for META’s business leaders.
With adoption outpacing awareness, the onus is on organizations to channel this technological tide into secure, sustainable waters — lest the very tools meant to empower become unwitting gateways for tomorrow’s breaches.
AI in Finance: How Apriem Advisors Leverages Technology to Elevate Client Experience
Artificial intelligence is transforming the financial industry at a remarkable pace. From data analytics to predictive modeling, AI is reshaping how advisors understand markets, manage risk, and serve clients. Yet, while some view AI as a replacement for human expertise, at Apriem Advisors, we see it differently. AI is not here to replace advisors—it is here to empower them, enabling more personalized, insightful, and proactive financial guidance.
At Apriem, we integrate AI into our advisory process to enhance, not replace, the human connection. Our advisors use AI tools to analyze complex portfolios, identify market trends, and uncover opportunities for wealth growth and preservation. This allows our team to spend less time on data crunching and more time building meaningful relationships, understanding client goals, and delivering strategies tailored to their unique circumstances.
What sets Apriem apart is our focus on behavioral finance and multigenerational planning. AI helps us identify patterns and risks that may otherwise go unnoticed, but it is our human advisors who interpret these insights in the context of your values, legacy, and long-term objectives. This combination of technology and human expertise ensures our clients receive both precision and empathy in every recommendation.
Moreover, AI allows Apriem advisors to provide real-time, proactive guidance. By monitoring market shifts and portfolio performance continuously, our team can alert clients to opportunities or potential risks before they arise. This level of attentiveness is simply not possible without the strategic application of AI, yet it remains grounded in the judgment and care of our seasoned advisors.
Our approach demonstrates that technology and human insight are most powerful when they work together. At Apriem Advisors, AI enhances our ability to serve, educate, and protect our clients, while our advisors ensure every financial decision aligns with your life goals and family values.
If you want an advisory experience where AI amplifies expertise, but human judgment and relationships remain at the center, Apriem Advisors is here to guide you. Schedule a consultation today at www.apriem.com and discover how our innovative approach can help you navigate your financial journey with confidence and clarity.
Disclosures: https://www.apriem.com/disclosures/
Samsung Unveils TRUEBench: A Real-World AI Productivity Benchmark for Enterprises
Samsung Electronics has unveiled TRUEBench (Trustworthy Real-world Usage Evaluation Benchmark), a proprietary framework developed by Samsung Research to measure AI productivity in enterprise settings.
TRUEBench aims to provide a realistic assessment of how large language models (LLMs) perform in real-world workplace tasks, emphasizing diverse dialogue scenarios and multilingual conditions that reflect actual business communication.
The benchmark covers common enterprise activities such as content generation, data analysis, summarization, and translation, spanning ten task categories and 46 sub-categories.
It relies on AI-powered automatic evaluation built from criteria collaboratively designed and refined by both human experts and AI to enhance reliability and reduce subjective bias. TRUEBench also supports cross-linguistic evaluation, comprising 2,485 test sets across 12 languages to mirror global workflows.
Test lengths vary from brief prompts of eight characters to lengthy documents exceeding 20,000 characters, ensuring coverage of simple requests as well as complex, multi-step tasks.
Samsung Research notes that current benchmarks are often English-centric, focus on single-turn Q&A, and fail to capture the complexities of real work environments. TRUEBench seeks to address these gaps by evaluating not only accuracy but also the implicit needs of users through detailed conditions that must be satisfied for a model to pass each test.
The evaluation process features a collaborative, iterative approach. Human annotators establish the initial criteria, which are then reviewed by AI to identify errors, contradictions, or overly restrictive constraints. Afterward, human evaluators refine the criteria again, and this cycle repeats to produce increasingly precise standards.
The resulting automatic evaluation applies these cross-verified criteria, promoting consistency and minimizing bias. For each test, all stipulated conditions must be met, enabling more granular and precise scoring across tasks.
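The all-conditions pass rule described above can be sketched in a few lines. This is not TRUEBench’s actual implementation, only an illustration of the principle: a model response passes a test case only if every stipulated criterion holds, and the example criteria below are invented for demonstration.

```python
from typing import Callable, List

# A criterion is any predicate over the model's response text.
Criterion = Callable[[str], bool]

def passes(response: str, criteria: List[Criterion]) -> bool:
    # All stipulated conditions must be met for the test to pass.
    return all(check(response) for check in criteria)

# Invented criteria for one hypothetical test case.
criteria = [
    lambda r: len(r) <= 200,                 # length constraint
    lambda r: "deadline" in r.lower(),       # must mention the deadline
    lambda r: not r.startswith("As an AI"),  # no boilerplate opener
]

print(passes("Reminder: the deadline is Friday.", criteria))  # → True
```

Scoring at the level of individual conditions, rather than a single holistic judgment, is what enables the granular comparisons published on the leaderboard.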
TRUEBench data samples and leaderboards are hosted on Hugging Face, allowing public access to model performance comparisons. The platform supports up to five models per comparison and publishes metrics on both performance and efficiency, including data on average response lengths. Details about TRUEBench can be found on the Hugging Face page at https://huggingface.co/spaces/SamsungResearch/TRUEBench.
Paul (Kyungwhoon) Cheun, CTO of Samsung Electronics’ DX Division and Head of Samsung Research, underscored that TRUEBench embodies Samsung’s deep in-house AI productivity experience.
He stated that the benchmark aims to set new standards for productivity evaluation and reinforce Samsung’s position as a technology leader.
Is Deregulation the New AI Gold Rush? Inside Trump’s 90-Point Action Plan
In July 2025, the Trump administration released a 28-page blueprint, “Winning the Race: America’s AI Action Plan,” which reads like a modern-day gold-rush map.
It outlines over 90 policy positions across multiple agencies, all with a single goal: to remove barriers to AI innovation. This deregulatory approach is the heart of the plan.
Why It Matters Now
With China, the EU, and private rivals all racing to lead in AI, the administration argues that streamlined approvals and clearer guidelines will help U.S. firms innovate faster. Critics counter that speed may come at the expense of environmental safeguards, worker training, and protections against bias.
Staking the Claims: Anatomy of a Deregulatory Plan
The AI Action Plan isn’t a single law. It’s a series of executive orders and policy mandates designed to remove regulations and accelerate AI deployment. Key elements include:
- Fast-Tracked Permitting: An executive order specifically expedites federal permits for data centers and semiconductor manufacturing under existing NEPA and FAST-41 processes. This is a direct response to a major industry complaint about infrastructure build-out delays.
- AI Export Promotion: The Commerce and State Departments will partner with industry to export “secure, full-stack AI packages” to U.S. allies. This policy aims to build an American-led AI ecosystem abroad, free from foreign regulatory influence.
- “Woke” AI Guardrails Removed: New procurement rules will expunge DEI language from federal contracts, insisting that federally contracted AI must reflect “objective truth” free of ideological bias. This is a clear move to deregulate the ethical and social guardrails placed on AI development.
Prospecting for Performance: Technical Leaps & Public Pulse
The administration’s deregulatory push coincides with rapid technological advancements. The plan aims to build on these successes by removing what it sees as unnecessary red tape.
- Medical Device Claims: The FDA cleared 221 AI-enabled medical devices in 2023, up from just 6 in 2015. This surge in regulatory confidence is a direct result of new policies that allow companies to more quickly test and deploy AI tools.
- Benchmark Breakthroughs: AI performance on major benchmarks saw dramatic leaps in 2024. Scores on the MMMU, GPQA, and SWE-bench tests rose by 18.8, 48.9, and 71.7 percentage points, respectively. The plan argues that removing bureaucratic friction will accelerate this progress even further.
- Public Sentiment: This progress is met with public skepticism. A 2025 AI Index report found that only 38% of Americans believe AI will improve health and only 31% expect net job gains, a sentiment that echoes the wary attitude of a miner looking for fool’s gold.
Those gains, equivalent to finding gold flakes in untested soil, suggest that models are learning faster than before. But breakthroughs on test benches don’t always match real-world reliability.

Data Centers: Growth & Impact
The new permit rules have unleashed a wave of data-center proposals:
- Energy Use: U.S. facilities consumed 176 terawatt-hours in 2023 (about 4.4% of national electricity) and could reach 12% by 2028.
- Emissions Toll: A Department of Energy survey of 2,100 centers found 105 million tonnes of CO₂ last year, more than half from fossil-fuel backup generators.
Faster approvals mean new investment dollars, but also sharper debates over rising energy demand and the environmental footprint of an AI boom.
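A quick back-of-the-envelope check shows what these percentages imply. The calculation assumes national consumption stays flat through 2028, which it will not; it is only meant to put the 12% figure in terawatt-hour terms.

```python
# Figures from the article above.
data_center_twh_2023 = 176.0
share_2023 = 0.044  # 4.4% of national electricity

# Implied total U.S. electricity consumption in 2023.
implied_national_twh = data_center_twh_2023 / share_2023
print(round(implied_national_twh))  # → 4000 TWh

# A 12% share in 2028 would then mean (assuming flat national demand):
print(round(implied_national_twh * 0.12))  # → 480 TWh
```

In other words, the projection implies data-center demand roughly tripling in five years, which is why the permitting debate is inseparable from the energy one.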
Chips & Open Source: Who Benefits
Hardware and community code are twin engines of the AI economy:
- Semiconductor Exports: American chip sales hit $70.1 billion in 2024 (up 6.3%), driven by fabs in Texas and Oregon.
- Model Scans: Open-source security tools have analyzed 5 million AI models and flagged 350,000 potential biases or safety issues, proof that not every discovery is pure gold.
Eased export rules give chipmakers new markets, while looser sharing lets small labs, from university groups to bootstrapped startups, compete on the same playing field as hyperscale giants.

Jobs at Risk & Opportunity
No gold rush is without its claim jumpers and ghost towns:
- Automation Risk: A McKinsey study warns that 30% of U.S. work hours could be automated by 2030, triggering 12 million occupational shifts.
Commenting on the human cost of these changes, Anirudh Agarwal, Director at OutreachX, cautions, “Accelerating permits without investing in people is like staking gold claims with no plan to refine the ore.”
Claim Holders and Ghost Towns: Potential Winners & Losers
The deregulatory “gold rush” is creating clear winners and losers.
● Winners:
- Chip Makers & Fab Operators: Can build new semiconductor “mines” under eased zoning regulations.
- Cloud Giants: Can erect hyperscale campuses with fewer permit delays.
- Open-Source Labs: Are designated as official prospectors, free to pan for new open-source models.
● Losers:
- Front-Line Workers: Face shuttered roles without guaranteed retraining.
- Civil Rights Advocates: Warn that removing DEI guardrails may lead to biased or unsafe AI in critical services.
Civil Rights & Accountability Concerns
Several advocacy organizations have raised alarms about the broader impact of unfettered deregulation:
- ACLU: “The plan undermines state authority by directing the Federal Communications Commission to review and potentially override state AI laws, while cutting off ‘AI-related’ federal funding to states that adopt robust protections,” said Cody Venzke, senior policy counsel with the American Civil Liberties Union.
- People’s AI Action Plan: Over 80 labor, civil-rights, and environmental groups released a rival blueprint, warning that unfettered deregulation caters to Big Tech, sidelines public interest, and undermines worker protections.
- State Protections: Critics note the federal plan overrides thoughtful local safeguards, stripping states of the right to prevent AI-driven bias in housing, healthcare, and law enforcement, and risks “unfettered abuse” of AI systems.
Mapping the Aftermath
Deregulation has opened the sluices for an AI gold rush, fueling boomtowns in tech hubs and reshaping local economies. Yet, as with every frontier rush, the real test comes when the veins run dry. Will communities that staked their claims emerge wealthier, or face the ghost-town fate of those left sifting yesterday’s tailings? As Congress, courts, and citizens weigh in, the question remains: in this 90-point gold rush, who finds riches, and who pays the toll?
Mastercard Unveils Whitepaper to Power Responsible AI Adoption in Africa
Artificial Intelligence (AI) continues to reshape the digital landscape, solidifying its role as a transformative enabler that enhances task efficiency and helps self-learners absorb complex, diverse information rapidly.
Demonstrating strong conviction in the potential of AI, Mastercard has launched its whitepaper titled “Harnessing the Transformative Power of AI”, a Pan-African study exploring the continent’s readiness, opportunities, and roadmap for responsible AI adoption.
The whitepaper highlights the transformative power AI holds across key sectors such as agriculture, health, education, energy, and finance. Yet, to truly unlock this potential, AI must be deployed responsibly and inclusively. It must be guided by ethical frameworks and supported by meaningful infrastructure, robust policies, and local engagement.
According to Mark Elliott, Division President for Africa at Mastercard, the company envisions an AI future rooted in local realities, one that drives inclusive growth and expands access to opportunity. He emphasizes that for Africa to harness the full potential of AI during its current super cycle, investments must be made in infrastructure, data systems, talent pipelines, and classrooms. These investments, he notes, must be both intentional and sustainable.
The AI market in Africa is expected to grow from USD 4.5 billion in 2025 to USD 16.5 billion by 2030, according to a recent report by Statista. This sharp growth projection makes a compelling case for urgent, multi-stakeholder collaboration and forward-thinking investment.
Mastercard’s whitepaper explores AI’s potential to positively impact digital infrastructure, policy and governance, research and development, local language processing, and broader innovation across the continent. Africa’s unique demographics, mobile-first infrastructure, and entrepreneurial energy position it not just as a participant, but as a co-architect of the global AI future.
Greg Ulrich, Mastercard’s Chief AI and Data Officer, stressed the importance of trust in AI adoption. “AI is only as powerful as the trust behind it. At Mastercard, we are committed to building AI that is responsible, inclusive, and focused on delivering value to our customers, partners, and employees,” he said. “This isn’t just innovation, it’s innovation with integrity.”
Harnessing the full potential of AI, particularly in Africa, is expected to play a significant role in accelerating financial inclusion and driving digital and economic development. Mastercard’s whitepaper includes insights from leading African technologists, policymakers, academics, and entrepreneurs. It draws on interviews with organizations such as UNESCO, the African Center for Economic Transformation, and fintech leaders from across the region.
Ambassador Philip Thigo, Kenya’s Special Envoy on Technology, noted that the Kenyan government has already integrated AI in over 26 state departments as part of its broader digital transformation agenda.
He encouraged stronger collaboration between governments and private sector players in technology, emphasizing that partnerships should be clear, strategic, and focused on shared outcomes.

Ambassador Philip Thigo, Kenya’s Special Envoy on Technology
Ambassador Thigo also commended Mastercard for its consistent innovation and customer-focused solutions, particularly those tailored to meet diverse user preferences.
In a plenary session involving stakeholders from health, innovation, the private sector, and government, AI was framed not only as an enabler but also as an emerging utility.
Renowned innovator Tonee Ndungu highlighted AI’s decades-long presence, stating that while the concept has existed since the 1950s and 60s, recent developments have stirred new debates, largely because of the “artificial” aspect.

Renowned innovator Tonee Ndungu
“In the early days, few believed AI could become the utility it is today. Those who fail to keep pace may dismiss it as a bubble, but it’s not. Just like electricity, AI is becoming a utility, one with force,” he said.
Dr. Jean Kyula, Country Manager for Kenya at Helium Health, shared how AI is revolutionizing healthcare, describing it as “more than an assistant, an answer.” She illustrated how AI allows medical professionals to diagnose conditions simply by analyzing a patient’s image and receiving symptom assessments or disease predictions.
“This is a real transformation, especially in a country like Kenya, where over 70% of specialists are concentrated in urban areas serving less than 30% of the population,” Dr. Kyula noted. “Those in rural and marginalized regions often lack access to quality care. AI is starting to change that.”

Dr. Jean Kyula, Country Manager for Kenya at Helium Health
She emphasized the need for expanded public-private partnerships and capacity-building efforts to ensure that even remote communities can understand and benefit from AI technologies.
Addressing concerns about job displacement, experts at the discussion urged the public to view AI not as a threat but as an opportunity to enhance efficiency. They emphasized that the current AI super cycle should be seen as a tool for empowerment, encouraging especially young people to pursue self-learning, adopt new skill sets, and remain agile in the face of evolving technologies.
How AI is Helping East African Banks Navigate a Digital Crossroads
Kenya, often hailed as a continental leader in mobile banking and digital financial inclusion, has seen an explosion in digital transactions in recent years.
According to the Central Bank of Kenya (CBK), the value of mobile money transfers reached KSh 7.95 trillion in 2023. While this marked only a modest increase from 2022, it came amid challenging macroeconomic conditions and a hike in excise duty on transfers.
The region is becoming a target for sophisticated fraud, money laundering, and terrorism financing schemes that can slip through the cracks of conventional anti-money laundering (AML) systems. According to global watchdogs and local compliance experts, East African banks remain vulnerable due to fragmented data, siloed systems, and limited visibility across complex, multi-country networks.
“Financial crime doesn’t stop at borders. But legacy systems often do,” says one Nairobi-based compliance officer at a tier-one bank. “To protect customers and reputations, we need tools that can see the whole picture, not just fragments.”
Enter AI: A New Kind of Watchdog
To meet this growing threat, I&M Group PLC has announced a partnership with ThetaRay, an Israel- and US-based AI firm that provides advanced transaction monitoring and AML tools to some of the world’s top banks and fintechs. The company’s solution, already deployed at institutions such as Santander, Mashreq Bank, Onafriq, and ClearBank, uses “unsupervised” machine learning to detect anomalies and suspicious patterns without relying solely on pre-programmed rules.
“In today’s financial environment, you need technology that can adapt in real time,” said I&M Group CEO Gul Khan. “This AI platform gives us a scalable, intelligent solution to monitor billions of data points and identify risk proactively, without compromising the customer experience.”
But integration has outpaced harmonization. Each country still operates its own set of regulatory rules and oversight structures, making regional compliance a puzzle. Meanwhile, fraudsters are using this regulatory patchwork to move illicit funds across borders with increasing sophistication.
ThetaRay’s platform also supports real-time alerts and decision-making, which is essential in a market like Kenya, where digital payment volumes are vast and fast.
The Trust Factor
But the adoption of AI is not just about regulatory compliance, it is also about maintaining trust. In recent years, East African consumers have become more financially literate and digitally savvy. They expect secure, seamless services, and are quick to lose confidence in institutions that fall short.
“We’re not just talking about stopping money laundering,” said ThetaRay CEO Peter Reynolds. “We’re talking about enabling financial institutions to operate with integrity, detect emerging threats, and support financial inclusion without compromise.”
While exact cyber-fraud loss figures remain unverified for 2023, CBK officials and commercial bank compliance teams agree the trend is concerning. And as banks race to digitize their services, the attack surface is only expanding.
Still, success depends not just on the technology but on how it is used. Training, integration, and regulatory cooperation are essential to ensure that innovation leads to real impact.
In an age where creativity can be summoned with a keystroke and distributed at the speed of light, the music industry stands on the cusp of its most radical reinvention yet — one not driven by guitars or genres, but by generative code. The old muse, once flesh and blood, now hums with electricity. And the results are both thrilling and unsettling.
Artificial intelligence, that once-silent partner in digital production, is stepping confidently into the studio — not just as an assistant, but increasingly as a collaborator, composer, and, in some cases, performer. The implications for artistry, economics, and ethics are profound.
Nowhere is this more apparent than in the explosive rise of AI-powered platforms like Suno and Udio, which allow users to generate entire songs — lyrics, vocals, and instrumentals — from a single text prompt. In minutes, an idea becomes a track.
A teenager in Nairobi or Manchester can now command the equivalent of a professional studio, virtual band, and sound engineer from their laptop. The barriers to entry haven’t just lowered — they’ve evaporated.
Yet this is no passing trend. Consider AIVA (Artificial Intelligence Virtual Artist), a platform designed not for pop bangers, but for composing emotive, cinematic scores.
AIVA analyses centuries of classical music to generate symphonies, waltzes, and soundtracks that would make Eric Wainaina raise an eyebrow. It’s already being used in film, advertising, and gaming — industries hungry for affordable, mood-rich music on demand.
Then there’s MusicFX by Google, an experimental tool allowing users to generate instrumental music using simple text inputs. Its real innovation lies not just in producing pleasant loops, but in providing fine-tuned control over duration, looping, and style — a sign of how AI is becoming more responsive to artistic nuance, not merely functional output.
Together, these tools are reconfiguring what it means to create. For many young musicians, the studio is no longer a place — it’s an interface. The DAW (Digital Audio Workstation) still holds ground, but increasingly, producers are turning to AI for beat ideation, vocal synthesis, chord progressions, and even marketing advice. Whether through Soundraw, Boomy, or Amper, AI is present at every stage of the pipeline — from the spark of creativity to the final mix.
Even in the British music scene — long defined by cultural grit and genre-defining rebellion — AI is finding its rhythm. Bedroom producers are blending pop music with AI-generated orchestras. Ambient artists are feeding neural networks with field recordings. Labels are using data analytics to forecast trends, and Artist & Repertoire (A&R) reps now watch TikTok and AI dashboards with equal attention.
But for every breakthrough, there’s a moral counterpoint.
Who owns the output of AI? Is a song generated by AIVA truly yours if you didn’t pen the melody or record the vocals? Does MusicFX dilute the authenticity of musical performance or expand the palette of sonic possibility? And when algorithms start to learn from copyrighted music — as they inevitably do — are they creating, copying, or stealing?
The courts have yet to catch up. In the meantime, creators are left to navigate an ethical grey area, where inspiration, imitation, and innovation collide. It’s a landscape that echoes the early days of sampling — only now, the sample is the world’s entire musical history, processed and recombined in milliseconds.
Yet amid the disruption, there is promise. AI is empowering disabled musicians, allowing them to create through speech, gesture, or code. It’s enabling the creation of hyper-personalised music therapy. And it’s giving unheard voices — in regions without access to formal training or equipment — a megaphone to the world.
AI is not replacing musicians. It is replacing monotony. It is automating the generic, the formulaic, the 300th trap beat with identical hi-hats. It challenges us not by stealing creativity, but by demanding more of it. In this new age, originality must be louder. More daring. More human.
The soul of music still belongs to the people — the creators and the listeners. AI may write in key, but it cannot write from heartbreak. It may master dynamics, but it cannot master feeling. The great songs still need flaws, tension, story — elements no algorithm can yet convincingly fake.
As we continue this great sonic experiment, we must remember: the tools have changed, but the mission has not. Music is still about connection — about making others feel what you feel. Whether strummed on strings or summoned by prompts, that essence remains untouched.
The revolution is here. The machines are in the studio. But the beat — at least for now — is still ours to set. I still believe the future sounds best when humanity holds the mic — even if AI is adjusting the reverb.
Nick Thiong’o is the Executive Director of www.concept-vault.com, a creative technology hub exploring the future of storytelling, music and digital innovation across Africa and beyond.