The CCI’s AI Market Study: Regulation through persuasion, not prohibition
On 6 October 2025, the Competition Commission of India (CCI) released its long-awaited market study report on Artificial Intelligence (AI) and competition law (Report). The Report marks India’s first comprehensive look at how AI might reshape market dynamics - and how regulators should respond.
Across jurisdictions, competition authorities are converging on a few key risks, even as their responses diverge. The CCI’s report sits comfortably in the middle of this global debate: cautious and pragmatic, preferring cooperation and advocacy over hard regulation.
Where the world agrees - the risks of AI
Competition agencies around the world are aligned on one thing - AI could amplify existing market power. The CCI’s findings echo this consensus, highlighting concerns around concentration, transparency, and the potential for AI to make anti-competitive conduct more effective.
The key competition law issues identified in AI markets globally include:
Market concentration in AI infrastructure - A handful of global technology firms control access to essential AI inputs: data, computing power, and foundational models. Smaller players depend on these incumbents for infrastructure, deepening structural dependencies and making entry harder.
AI enabling anti-competitive conduct - AI systems could make collusion easier and more efficient - through coordinated pricing algorithms, hub-and-spoke arrangements built around a shared algorithm, or autonomous systems that “learn” and sustain collusive outcomes. On the unilateral conduct side, self-preferencing, exclusive partnerships, or tying arrangements could lock users deeper into AI ecosystems, entrenching incumbents and limiting choice.
Strategic mergers under scrutiny - Globally, regulators are watching AI-related mergers closely, particularly where acquisitions might foreclose access to data, talent, or computing resources. The fear is that strategic consolidation could cement market dominance in a fast-evolving field.
The transparency problem - The “black box” nature of AI complicates oversight. Detecting collusion or exclusionary conduct is harder when algorithms are self-learning and opaque. Regulators are calling for greater auditability and explainability to bridge this gap.
Notably, the CCI adds a distinctly Indian dimension to this conversation - that AI could enable more effective price discrimination or predatory pricing, harms that could hit vulnerable consumers hardest and erode trust in AI systems. This assertion appears to be based solely on stakeholder perceptions and secondary literature detailing AI’s capacity to execute such strategies.
Where approaches diverge - how to address these risks
While regulators agree on the risks, they differ significantly on the remedies. A clear divergence has emerged: some jurisdictions have moved ahead with bespoke regulatory regimes to govern AI systems, while others are taking a more cautious approach that focuses on enforcement under existing laws, supplemented by voluntary or non-binding measures.
Proactive risk-based AI regulation
The EU’s AI Act and Digital Markets Act (DMA) form the most structured response to regulating the AI sector. The AI Act classifies systems by risk level, imposing strict compliance obligations for “high-risk” applications. The DMA’s rules, meanwhile, apply only to AI firms that qualify as “gatekeepers”.
The UK’s approach is more flexible but no less proactive. Through the Digital Markets, Competition and Consumers Act, its competition authority, the Competition and Markets Authority (CMA), can designate firms with “Strategic Market Status” and impose bespoke conduct rules. The CMA’s updated market study on AI models reflects both its enhanced understanding of AI’s complexities and the rapid evolution of the AI sector.
Measured moves: AI oversight without new rules
Other regulators have adopted a more cautious and incremental approach, favouring monitoring conduct in AI markets and using existing tools to address potential issues, alongside guidance and other voluntary frameworks.
United States: Active enforcement remains the primary approach, with recent cases involving algorithmic collusion and investigations into major AI players like Microsoft, OpenAI, and NVIDIA.
Asia: Some Asian regulators are experimenting with voluntary or non-binding approaches. Singapore has introduced an AI market toolkit and a voluntary certification framework to promote responsible AI use. Japan and South Korea are similarly using soft tools, such as guidelines and monitoring mechanisms, to understand market dynamics before introducing hard law. In Asia, the emphasis seems to be on fostering responsible AI adoption, encouraging innovation, and driving growth, rather than implementing interventionist measures at this stage.
India’s Approach: Cooperative Compliance
While major jurisdictions are experimenting with ex-ante frameworks to preempt AI-related harms, the CCI’s market study indicates a measured stance, focusing on cooperation, advocacy, and incremental capacity building. This approach is pragmatic and realistic - it recognises that India’s role in the global AI race is still emerging, and that premature regulation could stifle innovation before competitive dynamics have fully matured.
This is evident from the CCI’s action plan flowing from the Report, which mirrors the broader Asian preference for “soft” regulation:
Self-audit and proactive compliance: The Report’s most tangible takeaway is its proposed self-audit framework for AI firms, urging companies to proactively assess their AI systems for competition law risks and calling for greater transparency, particularly where algorithms could influence market outcomes and consumer welfare.
Building public infrastructure: The study suggests that government funding for open-source technologies and non-personal data repositories would help foster the growth of AI startups and smaller businesses. This is perhaps a recognition that India’s contribution to the global AI market might be more significant in the AI application layer, rather than in the foundational physical and digital infrastructure.
Building regulatory capacity and advocacy: The market study echoes the recent calls for increased institutional capacity at the CCI - particularly to tackle digital markets. The Report indicates that the CCI will continue to engage with stakeholders, other sectoral regulators and international competition regulators.
The AI market study indicates the CCI’s alignment with the Indian government’s recognition of India’s growing AI capabilities and immense potential. From a competition standpoint, the regulatory approach appears to be focused on shaping incentives through increased stakeholder dialogue, rather than outright prohibitions. In this sense, the AI market study serves as a statement of intent, not a regulatory blueprint. It signifies that the CCI intends to learn before legislating - by signalling expectations, establishing benchmarks, and developing expertise.