A structured six-month advisory programme equipping Indian and multinational boards with the governance architecture, accountability structures, and regulatory readiness to govern artificial intelligence — with evidence.
The Governance Gap
AI is making consequential decisions across credit, fraud detection, customer service, and operations. Your board is accountable for these decisions — to your regulator, your auditor, your institutional investors. But the governance architecture to discharge that accountability has not been built.
This is not negligence. The frameworks are new. The regulatory expectations are hardening faster than boards can respond. The programme exists to close that gap — systematically, with evidence, in six months.
Governance accountability for AI is disputed or absent. No one has a board-endorsed mandate to own it.
Management cannot confirm what AI systems are in production, who owns them, or what risk they carry.
When your regulator asks for AI governance documentation — EU AI Act, SEBI LODR, RBI — your board cannot produce it.
Each month delivers a specific, named governance outcome. Nothing generic. Everything calibrated to your organisation's AI systems, your regulatory exposure, and your board's specific governance context. Led personally by Aparna Kumar.
Aparna Kumar — Former CIO, State Bank of India & HSBC
Founder, Nexora Tech & Aparna Tech Trends
Aparna Kumar is not an AI governance consultant who studied the frameworks. She built AI governance from inside two of the world's most regulated financial institutions — as CIO of State Bank of India and HSBC — under actual regulatory scrutiny, with consequences.
Banks, NBFCs, and insurance companies subject to RBI AI guidance, SEBI scrutiny, and regulatory examination of credit AI and fraud detection systems.
Listed entities with SEBI LODR Risk Management Committee obligations that now encompass AI risk oversight — and institutional investors asking governance questions.
Organisations with EU-facing products, EU resident data, or multinational AI deployments creating EU AI Act extraterritorial exposure alongside DPDP Act obligations.
Hospitals, diagnostics firms, and pharmaceutical companies deploying clinical AI — where patient safety obligations, DPDP Act health data provisions, and EU AI Act high-risk classification converge.
Organisations that have received AI-related examination findings, regulatory enquiries, or investor governance questionnaires — and need to build the evidence trail urgently.
Organisations deploying GenAI in customer-facing or operational roles without the governance architecture to oversee hallucination, adversarial exposure, or agentic AI risk.
All directors briefed and able to independently challenge management AI assertions
Accountability chartered, board-endorsed, and operationally live
Every material AI system visible, classified, and assigned a named owner
AI Risk Appetite Statement board-approved and measured quarterly
GenAI systems tested, vulnerabilities remediated, kill-switches rehearsed
Drift, bias, and hallucination dashboards operational for production AI
Internal Audit AI Coverage Plan endorsed by Audit Committee
EU AI Act gap assessment complete — SEBI, RBI, DPDP Act obligations mapped
Every client engagement is led personally by Aparna Kumar and supported by a multi-disciplinary team of domain experts. Each member brings demonstrated practitioner credentials — not advisory theory — ensuring every phase of the programme is delivered at board level.
Aparna Kumar
Programme Lead · Founder, Nexora Tech & Aparna Tech Trends
Programme Lead — Every board session, every discovery call
Aparna Kumar is one of India's most respected technology leaders, bringing 30+ years of CIO experience across India's largest public-sector bank and one of the world's leading multinational financial institutions. As CIO of State Bank of India, she led the world's first at-scale CBDC deployment during India's G20 presidency. As Country CIO of HSBC, she directed the analysis and remediation of 1,500+ applications under India's data localisation regulations and participated in the global implementation of DORA.
Aparna built AI governance under actual regulatory scrutiny — in production, at enterprise scale, with consequences. The Board AI Governance Readiness Programme embodies that practitioner experience, not advisory theory. Every board session, CXO conversation, and discovery call is led personally by Aparna. That commitment is non-delegable.
Sudiip Kumar
Senior Advisor · AI/ML Technology & Global Enterprise Delivery
AI/ML Engineering & Global Technology Delivery
A distinguished technology leader with over 30 years of experience building and scaling enterprise AI/ML platforms at global scale. Most notably, he led the end-to-end development of an AI/ML-based Translation Management Platform — a multilingual AI infrastructure now serving more than 2,750 global enterprise customers across 350+ languages, spanning the financial services, healthcare, legal, technology, and government sectors.
Within the Nexora programme, this advisor contributes specialist expertise to the AI architecture assessment, GenAI deployment evaluation, and vendor due diligence phases — bringing a proven track record of building AI products that operate reliably under the exacting governance, compliance, and quality standards of global enterprise environments.
Engagement Delivery Lead
Programme Execution · Working Group Facilitation
Operational backbone of every programme engagement
With 8–12 years in enterprise risk, IT governance, or compliance advisory, the Engagement Delivery Lead runs all monthly working group sessions, owns the evidence documentation trail, and manages client relationships between Nexora's board touchpoints. This professional operates with the regulatory depth to independently facilitate NIST AI RMF sessions, vendor due diligence workshops, and adversarial testing coordination — ensuring quality in every deliverable before it reaches the client.
Governance Analyst
Governance Documentation · Regulatory Research
Documentation and research engine of the programme
The Governance Analyst designs, populates, and maintains every governance deliverable — AI Inventories, Risk Appetite Statements, NIST AI RMF reports, EU AI Act gap assessments, and Board AI Dashboards. With 3–5 years in compliance, legal advisory, or management consulting, this professional translates complex regulatory obligations into precise, board-readable documentation and produces the quarterly Regulatory Horizon Scan covering EU AI Act, DPDP Act 2023, SEBI LODR, and RBI AI guidance developments.
AI Security Specialist
Adversarial Testing · GenAI Resilience (Contracted per Engagement)
Month 4 adversarial testing on live GenAI systems
Engaged from Month 4, the AI Security Specialist conducts prompt injection and adversarial testing on the client's live GenAI systems, executes structured red team exercises, validates that kill-switch runbooks work as documented (not just that they exist), and designs deepfake detection protocols for senior leadership impersonation scenarios. All findings are delivered in a written Adversarial Test Results report — severity-rated, OWASP LLM Top 10 aligned, and structured for board reporting and regulatory audit review.
BD & Operations Associate
Pipeline Management · Marketing Operations
Keeps the engagement running with clients
The BD & Operations Associate ensures a high-touch client relationship. This associate manages client discussions, proposal customisation, quarterly regulatory scan distribution, and private briefing event logistics — maintaining the precision and tone required for a board-level advisory practice.
Book a 30-minute discovery call with Aparna Kumar. Not a sales meeting — a direct conversation about where your board is on AI governance and what the most immediate priorities are. No obligation.
Book a Discovery Call →