Bridging the Gap: APAC Firms Urged to Move Beyond Responsible AI Buzzwords to Concrete Action
Responsible AI is fast becoming an urgent business necessity rather than a buzzword, as companies across the Asia-Pacific (APAC) region confront the escalating risks of the artificial intelligence technologies they are deploying. Yet while the concept is gaining significant traction, experts say substantial gaps persist in its practical, on-the-ground application.
In a recent discussion, Tej Randhawa, Accenture’s Responsible AI Lead for APAC, shared his view of the current landscape of responsible AI adoption, the challenges organizations face, and the steps needed to bridge the often-wide gap between strategic intent and effective execution.
The APAC Paradox: High Ambition, Low Readiness for AI Risks
A recent Accenture survey revealed a striking disparity: while nearly half of companies in the APAC region see responsible AI as a catalyst for growth, only one percent are fully equipped to mitigate AI’s inherent risks. The gap is particularly evident in Southeast Asia, where, despite progress on responsible AI efforts, organizational aspirations still far outstrip operational maturity.
“Operational maturity for responsible AI remains low across industries in Southeast Asia, despite the fact that more than three-quarters, or 78 percent, of companies have established responsible AI programs,” Randhawa noted. “This highlights the challenge of translating strategy into action.” AI risks, including algorithmic bias, the proliferation of deepfakes, AI ‘hallucinations’ (confident factual inaccuracies), and privacy breaches, have increased markedly. Given the region’s demographic diversity and large consumer base, these dangers make it imperative for organizations to address AI’s broader societal impact.
Beyond Programs: Prioritizing Core Elements for AI Risk Management
So, how should organizations approach the complex task of managing AI risks? Randhawa asserts that a foundational focus on prioritizing privacy, robust data governance, and stringent security measures is essential. These core elements, he argues, will not only help mitigate the multifaceted risks but also ensure that AI can be scaled responsibly across an organization.
Leading the Charge: Sectors Advancing in Responsible AI
While operational maturity presents a widespread challenge, Randhawa identified certain sectors that are spearheading responsible AI adoption. The banking sector is a notable frontrunner, largely due to its highly regulated environment and significant pre-existing investments in comprehensive risk management frameworks and controls. Government agencies and the public sector are also making significant progress, particularly in countries like Australia, where mandatory AI standards have been implemented.
“Industries serving customers at scale, such as retail, telecommunications, and consumer goods, are rapidly adopting responsible AI, especially driven by hyper-personalization and AI-driven customer engagement,” Randhawa explained. “Life sciences, particularly in R&D and clinical applications, are also witnessing accelerated AI adoption.” He anticipates that as regulatory frameworks continue to evolve, the adoption of responsible AI will accelerate across all sectors, heralding the next phase of AI maturity in the region.
Navigating the Hurdles: Challenges in Responsible AI Implementation
Organizations encounter several significant obstacles when attempting to implement responsible AI practices. Randhawa pointed to the modernization of digital core infrastructures and data platforms as major hurdles. Furthermore, companies are grappling with a fragmented and often confusing regulatory landscape, coupled with a persistent shortage of skilled AI professionals.
“Developed nations like Singapore, which already have established frameworks, are better positioned to overcome these barriers compared to emerging economies that face greater challenges in regulatory alignment and infrastructure readiness,” Randhawa added. He further clarified, “While financial concerns and ROI measurement are important, the real roadblocks often lie in organizational readiness, regulatory frameworks, and access to AI talent.”
Responsible AI as a Growth Engine, Not Just Compliance
Despite these challenges, Randhawa emphasized that responsible AI can be a powerful driver of business growth and innovation, and that organizations pioneering responsible AI practices stand to see a significant increase in AI-related revenue. “Responsible AI isn’t just about compliance; it’s about business success, trust, and sustainability,” he affirmed.
A Strategic Approach: Investing in Trust and Long-Term Value
For companies embarking on their responsible AI journey, Randhawa stressed the importance of investing in robust risk management, third-party audits, comprehensive employee training, and AI-specific cybersecurity measures. “These investments mitigate risks, build trust, and ensure compliance with evolving regulatory standards,” he said. He also advised companies to frame responsible AI not as a compliance checkbox but as a strategic tool: one that enhances brand reputation, protects data privacy, strengthens compliance, and optimizes operations, offering a competitive edge and opening new market opportunities.
Organizations in Southeast Asia are increasingly recognizing this long-term value, shifting from a focus on short-term gains towards a strategy that prioritizes trust, security, and sustainability. “This shift reflects a growing understanding that responsible AI is integral to long-term success, building a foundation of trust, compliance, and operational excellence,” Randhawa observed.
Bridging the Execution Gap: Proactive Steps and Future Tech
Closing the gap between ambition and execution requires proactive measures. Key obstacles include managing human interaction risks, dealing with poor-quality training data, and the inherent difficulties in embedding fairness into AI systems. Randhawa advocates for increased investments in AI governance, the implementation of clear policies, ensuring third-party accountability, and adopting a holistic, cross-functional approach to AI responsibility.
Looking ahead, Randhawa predicts that emerging technologies such as federated learning, differential privacy, and explainable AI (XAI) will be pivotal. “Explainable AI (XAI) will transition from a research concept into a practical necessity,” he stated, anticipating that companies will integrate XAI capabilities to enhance transparency and build trust. This evolution will also see the rise of new roles like “AI ethicists” and “explainability engineers.”
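To make one of the techniques Randhawa names concrete, differential privacy is often illustrated with the classic Laplace mechanism: calibrated random noise is added to a query result so that any single individual’s data has only a bounded effect on the output. The sketch below is a generic textbook illustration, not an Accenture or article-specific implementation, and the function names are illustrative only.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw from a zero-mean Laplace distribution via inverse-CDF sampling."""
    u = 0.0
    while u == 0.0:            # avoid log(0) at the distribution's edge
        u = random.random()    # now u is in (0, 1)
    u -= 0.5                   # shift to (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: noise with scale sensitivity/epsilon yields
    epsilon-differential privacy for a counting query (sensitivity 1)."""
    return true_count + laplace_sample(sensitivity / epsilon)

# Smaller epsilon means stronger privacy but noisier answers.
noisy = private_count(1_000, epsilon=0.5)
```

In practice, teams would typically reach for an audited library such as OpenDP or Google’s differential-privacy library rather than hand-rolled noise, since subtle sampling bugs can silently void the privacy guarantee.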
For organizations just starting out, Randhawa recommended establishing a strong data foundation, embedding responsible AI principles deeply into operations, and fostering trust among employees and customers. By taking these fundamental steps, companies can pave the way for scaling AI responsibly and creating sustainable, long-term value.