Artificial intelligence firms are grappling with significant financial challenges despite rapid growth. OpenAI CEO Sam Altman revealed that his company was losing money even on its $200-a-month ChatGPT Pro subscriptions because subscribers use the service far more than expected. Earlier estimates put the cost of simply running ChatGPT at roughly $700,000 a day, and with projected losses of $5 billion in 2024 against revenues of $3.7 billion, the company is struggling to sustain the high costs of running advanced AI models while keeping them accessible to users.
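A simple back-of-envelope sketch shows why a flat-price plan can lose money when serving costs scale with usage. Every figure below is an illustrative assumption chosen for the arithmetic, not OpenAI's actual per-query cost or usage data:

```python
# Illustrative sketch: why a flat subscription can lose money on heavy users.
# All numbers are assumptions for the sake of arithmetic, not real data.

PRICE_PER_MONTH = 200.0          # ChatGPT Pro price, USD
ASSUMED_COST_PER_QUERY = 0.25    # hypothetical blended compute cost per query, USD
ASSUMED_QUERIES_PER_DAY = 40     # hypothetical heavy-user load
DAYS_PER_MONTH = 30

monthly_serving_cost = ASSUMED_COST_PER_QUERY * ASSUMED_QUERIES_PER_DAY * DAYS_PER_MONTH
margin = PRICE_PER_MONTH - monthly_serving_cost

print(f"Assumed serving cost per heavy user: ${monthly_serving_cost:,.0f}/month")
print(f"Margin on a $200 plan: ${margin:,.0f}")  # negative: the plan loses money
```

Under these assumptions a heavy user costs $300 a month to serve, so each such subscriber loses the company $100; the arithmetic turns positive only if usage is capped or per-query costs fall.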
OpenAI's predicament highlights a broader issue in the AI industry: balancing innovation, scalability and profitability in a sector where resource consumption grows in step with usage rather than shrinking with scale, forcing companies to recalibrate their business models constantly.
The substantial costs incurred by AI companies are rooted in the computational and infrastructural demands of training and deploying large-scale AI models. Training a state-of-the-art model like GPT involves processing vast datasets to optimise billions of parameters, which requires large clusters of high-performance hardware running distributed training for weeks or months. The workload consumes enormous amounts of energy, typically measured in gigawatt-hours, and estimates for training some of the largest models run into tens of millions of dollars per training run.
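To see how these bills reach eight figures, consider a rough, hypothetical estimate; the cluster size, run length and hourly rate below are assumptions for illustration, not figures for any particular model:

```python
# Hypothetical estimate of a large training run's compute bill.
# All inputs are assumptions; they only show how GPU count, run length
# and hourly rates compound into tens of millions of dollars.

ASSUMED_GPUS = 10_000             # accelerators in the training cluster
ASSUMED_RUN_DAYS = 90             # length of the training run
ASSUMED_RATE_PER_GPU_HOUR = 2.0   # blended cost per GPU-hour, USD

gpu_hours = ASSUMED_GPUS * ASSUMED_RUN_DAYS * 24
compute_cost = gpu_hours * ASSUMED_RATE_PER_GPU_HOUR

print(f"GPU-hours: {gpu_hours:,}")             # 21,600,000
print(f"Compute cost: ${compute_cost:,.0f}")   # $43,200,000 under these assumptions
```

Even before energy, networking and staffing are counted, such a run costs on the order of $40 million under these assumptions, and every failed or repeated run multiplies the bill.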
The cost of inference—servicing user queries—adds a layer of complexity. Live deployment requires robust cloud infrastructure with low-latency response times to handle millions of simultaneous interactions. This necessitates investments in scalable storage systems, data pipelines and load-balancing mechanisms alongside geographically distributed data centres.
Additional expenses arise from the need to continuously update these models. Companies invest heavily in algorithmic refinements such as reinforcement learning and prompt engineering to improve performance and adapt to evolving usage patterns. Guardrails for safety, fairness and ethical compliance also require ongoing research, testing and implementation.
Data preprocessing—cleaning, labelling and augmenting datasets used for model training—constitutes a significant portion of the costs. This is often coupled with the cost of acquiring proprietary data.
To manage this ecosystem, AI companies often invest in dedicated engineering teams to oversee model deployment, troubleshoot performance issues and ensure uptime, which entails further labour and administrative costs.
Companies offering generative AI solutions have therefore yet to identify an optimal mechanism for achieving financial sustainability without relying on external funding. Perplexity AI's CEO Aravind Srinivas recently proposed a novel approach to monetising AI applications. He suggested that AI agents could take a cut of the transactions they facilitate, creating an advertising model in which vendors pay to display their offerings not to users directly but to the AI agents themselves.
This shifts the competition from vying for the user’s attention to capturing the AI agent’s attention, fundamentally changing how advertising and monetisation operate in an AI-driven ecosystem. Users would remain ad-free, while AI agents decide which vendor offerings to prioritise based on the advertising inputs they receive.
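A toy sketch makes the incentive problem concrete. Nothing below reflects how Perplexity or any real product actually works; it is a hypothetical agent that ranks vendor offers by relevance plus a paid bid, showing how money can displace relevance without the user ever seeing it:

```python
# Hypothetical sketch of the incentive problem: a toy agent ranks vendor offers
# by a relevance score plus a weighted "bid" the vendor pays the agent.
# Purely illustrative; not a description of any real system.

from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    relevance: float   # how well the offer matches the user's request (0-1)
    bid: float         # what the vendor pays the agent per referral, USD

def rank_offers(offers: list[Offer], bid_weight: float) -> list[Offer]:
    """Order offers by relevance plus a weighted bid term.

    With bid_weight = 0 the ranking is purely relevance-driven; any positive
    weight lets money displace relevance, invisibly to the user."""
    return sorted(offers, key=lambda o: o.relevance + bid_weight * o.bid, reverse=True)

offers = [
    Offer("Vendor A", relevance=0.9, bid=0.00),
    Offer("Vendor B", relevance=0.7, bid=0.50),
]

print([o.vendor for o in rank_offers(offers, bid_weight=0.0)])  # ['Vendor A', 'Vendor B']
print([o.vendor for o in rank_offers(offers, bid_weight=0.5)])  # ['Vendor B', 'Vendor A']
```

With the bid weight set to zero the most relevant vendor wins; with any positive weight, a vendor with deeper pockets can overtake a better match, and the user has no way to tell which ranking they received.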
At first glance, Srinivas's idea sounds like a win-win, and it might well be the future of marketing in an AI-driven world. But it has several problems. At its core, this approach exacerbates information asymmetry, a well-documented issue in both ethics and economics, by obscuring the mechanisms through which AI agents rank and prioritise options.
Users are excluded from understanding the transactional dynamics influencing the AI’s choices, undermining the principle of informed consent and limiting their ability to make autonomous decisions. This lack of transparency raises concerns about moral autonomy, as articulated in Kantian ethics, which emphasises the necessity of preserving individual agency.
This model commodifies the decision-making process of AI by monetising its attention, transforming a supposedly objective process into one shaped by financial incentives. It introduces market logic into what should ideally be rational and unbiased, undermining the philosophical ideal of AI as a tool for informed decision-making and fairness.
While users are spared direct exposure to ads, their autonomy in decision-making is compromised. Users rely on the AI agent’s choices without knowing the extent to which vendor-paid inputs shape these decisions. This creates a soft paternalism where choices are made on behalf of individuals in ways they cannot scrutinise.
If vendor payments influence decisions, the moral responsibility for potential harm becomes blurred. This raises philosophical questions about who is ethically accountable—the AI developers, the vendors or the system itself.
While companies will strive to monetise their AI models, achieving this goal is far more complex than it appears. Models that prioritise revenue through vendor-driven advertising or transactional cuts risk alienating users if they perceive the AI as biased. These strategies often encounter operational challenges such as ensuring transparency, avoiding monopolies and complying with regulations designed to protect consumer rights.
Successfully monetising AI requires innovative approaches that align financial incentives with user satisfaction, fairness, and long-term sustainability, making the process more about ethical and technical ingenuity than business acumen.
(Views are personal)
Aditya Sinha | Public policy professional