The Head of Artificial Intelligence leads the Group's end-to-end AI strategy, delivery, and governance across Generative AI, Copilot AI (productivity AI), and Agentic AI (autonomous and tool-using agents). The role is accountable for value creation, safe-by-design engineering, and regulatory compliance across all jurisdictions in which the Group operates. This leader also serves as the CDO's primary counterpart for the Data & AI Governance Control Tower, operationalizing policies, standards, risk controls, and regulatory obligations (BNM, national regulators, and international frameworks such as the EU AI Act and EU Data Act).
About the Role
Scope:
- Enterprise AI portfolio spanning Generative AI (LLMs, diffusion models), Copilot AI for productivity, and Agentic AI (tool-using and workflow/decision agents).
- AI product engineering, MLOps and AIOps, evaluation, monitoring, and resilience.
- Data & AI Governance Control Tower dashboards and workflows across policy, standards, risk, compliance, and audit.
- Regulatory adherence: BNM Risk Management in Technology (RMiT), PDPA (as amended), sectoral/national regulators, and international references (EU AI Act, EU Data Act).
- Change management, literacy, and adoption across Group business units and corporate functions.
Responsibilities
- Strategy & Portfolio: Define a 3–5 year AI strategy and investment roadmap covering GenAI, Copilot AI, and Agentic AI; maintain an enterprise AI use-case pipeline with quantified business value, risks, and ROI.
- AI Product Leadership: Stand up cross-functional teams (AI product managers, data scientists, ML engineers, prompt/interaction engineers, evaluators) to ship AI products and agents that meet reliability, robustness, and safety thresholds.
- Agentic AI & Orchestration: Establish standards for agent architectures (planning, tools, memory, feedback, human-in-the-loop); implement guardrails, fail-safes, and escalation paths for autonomous actions.
- Copilot AI (Productivity AI): Drive safe enablement of Copilot-style assistants across collaboration suites; enforce identity, permissions, sensitivity labels, and DLP policies; instrument adoption, safety, and productivity metrics.
- GenAI Engineering Excellence: Govern patterns for model selection (open, hosted, and proprietary), fine-tuning/RAG, prompt design, evaluation (hallucination, bias, toxicity), cost/performance optimization, and observability.
- Data & AI Governance Control Tower: Operationalize the Control Tower (platform and processes) to manage AI model inventory/registry, lineage, risk registers, DPIAs/AI impact assessments, policy attestation, and audit trails.
- Regulatory Compliance: Translate BNM RMiT requirements (governance, technology risk, cloud consultation/notification, cybersecurity), PDPA amendments, and international obligations (EU AI Act/Data Act) into actionable controls and evidence.
- Risk Management: Run pre-deployment reviews; define go/no-go criteria; manage incidents involving AI-generated content or actions; institute post-market monitoring for AI systems and agents.
- MLOps & AIOps: Implement standardized CI/CD for models/agents, model versioning, feature stores, evaluation pipelines, drift detection, human-in-the-loop override, and rollback procedures.
- Security & Privacy: Enforce zero-trust principles, least privilege, data minimization, encryption, red-teaming (traditional and LLM-specific), jailbreak/prompt-injection defenses, and content provenance/watermarking where applicable.
- Ethics & Responsible AI: Embed fairness, explainability, transparency, and stakeholder engagement; maintain documentation (model cards, system cards, transparency notes).
- Change & Adoption: Build enterprise AI literacy programs; coach business units on use-case delivery; define citizen-developer guardrails and approval flows.
- Vendor & Partner Management: Oversee system integrator (SI) and platform partners; negotiate SLAs covering safety, reliability, latency, and uptime; ensure exit strategies and portability.
- Budget & KPIs: Manage the P&L for the AI portfolio; track KPIs (business impact, efficiency, reliability/safety, regulatory audit readiness, cost-to-value).
Qualifications
- Advanced degree in Computer Science, AI/ML, Data Science, or related field; or equivalent experience.
- 12+ years in AI/ML and data leadership; 4+ years delivering GenAI/LLM applications and AI agents at enterprise scale.
- Hands-on experience with model development (RAG, fine-tuning), agent frameworks, evaluation, and MLOps.
- Demonstrated delivery in regulated environments; familiarity with technology risk, privacy, and compliance obligations.
- Proven team-building and stakeholder management across business, risk, legal, security, and technology.
Required Skills
- AI Product Leadership and Portfolio Management.
- Risk-based thinking and regulatory translation into controls and evidence.
- Technical depth in LLMs/GenAI, orchestration/agents, data platforms, and cloud.
- Operational excellence in MLOps/AIOps and reliability engineering.
- Excellent communication; ability to create executive-ready materials and transparency notes.
Preferred Skills
- ISO/IEC 42001 (AI Management System) knowledge or implementation experience.
- NIST AI RMF operationalization experience.
- Cloud certifications (Azure/AWS/GCP) relevant to AI workloads.
- Privacy/security certifications (e.g., CIPP/E, CIPM, CISSP) are a plus.