AI is the future, but is this the future we want?

Edited by Emilia Grabowski, Jordan Collinson, Owen Andrews, and Sarah Ahmad

On March 10, 2000, the Nasdaq composite index peaked at 5,048, capping a five-year frenzy in which it had climbed from under 1,000 to above 5,000. Investors poured massive sums into internet start-ups that had little chance of ever turning a profit, driven both by fear of missing out on the seemingly endless opportunities the internet offered and by a genuine, if misguided, belief in the potential of the new technology. Yet by late 2002, the Nasdaq had sunk to a nadir of 1,139, a decline of nearly 77%. The collapse of this speculative frenzy, known as the ‘dot-com bubble,’ offers a cautionary tale of what can happen when expectations of a technology exceed its actual value. The dot-com bubble matters today because it is the closest parallel to the current buzz surrounding AI: both involve enormous sums of investment and a restructuring of the economy and stock market around companies tied to a new, trendy technology. But lost within all this talk about whether the United States is in an AI bubble is a rather simple question: who, exactly, does the AI boom benefit? The effect AI has on the everyday person must be a primary consideration. The United States’ infatuation with the prospect that AI will yield unprecedented productivity gains masks a simple truth: the influx of AI into daily life harms consumers and workers alike.

Much of the hype surrounding AI is centered on its transformative potential. The American government focuses on the boost AI gives to GDP growth and technological innovation, while tech companies promise AI will massively increase the productivity of workers and make their jobs easier. The result is a set of truly astounding numbers: 75% of all gains in the S&P 500 come from AI stocks; Nvidia, a leading AI chip developer, recently became the first company in the world to surpass a $5 trillion valuation; 92% of GDP growth during the first half of 2025 was a result of AI expenditures. But some of these astounding numbers warrant caution. The ‘Buffett Indicator’ is a metric used to gauge whether the market is undervalued or overvalued by comparing the total value of the U.S. stock market to GDP. While a ratio of 100% suggests a market is fairly valued, anything significantly higher signals a bubble. At the height of the dot-com frenzy in 2000, this indicator reached an alarming 150%. Today, that figure has surged to an all-time high of 230%, a clear warning that market valuations do not match economic reality.
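As a quick illustration of how this metric works, the indicator is simply total stock market capitalization divided by GDP, expressed as a percentage. The sketch below uses hypothetical placeholder figures, not actual market data:

```python
# Minimal sketch of the 'Buffett Indicator' described above.
# All figures here are hypothetical placeholders, not real market data.

def buffett_indicator(total_market_cap: float, gdp: float) -> float:
    """Return the total stock market value as a percentage of GDP."""
    return total_market_cap / gdp * 100

# Hypothetical example: a $30 trillion stock market against a $20 trillion
# economy yields a reading of 150%, the level the article cites for 2000.
print(f"{buffett_indicator(30e12, 20e12):.0f}%")  # prints "150%"
```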

While investment continues to skyrocket, there is still little evidence that AI will boost productivity to a level that justifies this spending. A recent report from McKinsey found that approximately 80% of companies using generative AI observe no tangible impact on their total operating profit. Additionally, although GPT-5 was released more than two years after GPT-4, many users report that the older model is better and easier to use. Currently, AI adds little value to the economy itself, and the economic growth attributed to AI is predicated on an idealized version of what AI could be five years from now rather than on what it is today. When this AI bubble inevitably crashes, an economic depression (or, at the very least, a downturn) will likely follow, and the ones who will bear the brunt of it are everyday people, not the enormous AI companies or the government. Even in 2008, when the housing market collapsed and many large corporations and banks faced bankruptcy as a result of their own financial ineptitude, the American government passed the Emergency Economic Stabilization Act of 2008 to bail out those banks while doing comparatively little to help the ordinary American.

Moreover, vast resources are being funneled into the development of the AI sector, often to the detriment of the average citizen. Data centers large enough to be seen from orbit are being built all over the United States next to small towns, where they consume enormous amounts of water and energy. Large data centers are projected to use between 16 and 33 billion gallons of water annually by 2028, equal to the yearly water consumption of roughly 300,000 American homes. Residents living near data center complexes already report problems accessing clean and consistent drinking water as a result. In addition, an average data center uses as much electricity as 100,000 households, a figure that is only projected to rise as companies build increasingly large facilities. Homes and businesses in some towns face $4.3 billion in additional energy costs due to the power consumption of nearby data centers.

This problem is not unique to the United States. Ireland offers a cautionary tale about data centers and their insatiable demand for energy and water: there, data centers consume more electricity than all urban homes in the country combined. The strain on the power grid has been so intense that Ireland’s grid operator has halted new data center connections to the grid for fear of rolling blackouts. These side effects expose an assumption often buried in the hype around AI: that tech companies will look out for the good of the worker or consumer. Time and again, history shows otherwise.

Essentially, AI is a double-edged sword: either it lives up to its immense promise and dramatically boosts worker productivity, in which case millions of jobs will be lost, or it does not, in which case all this spending and investment is unjustified and an economic depression looms. In short, there is no guarantee that productivity gains from AI will benefit workers, and any efficiency gains it does create are likely to translate into joblessness, even among high-skilled workers. One only has to look at the job market for recent computer science graduates to see how AI is already shifting the roles and expectations of workers and creating structural unemployment.

There are several ways, however, in which the United States government can address these issues and prevent AI from becoming a scourge on the economy and society at large. First, the United States must do more to prevent large corporations from building enormous data centers that act as black holes for electricity and water next to residential areas. This past July, President Donald Trump issued an executive order titled “Accelerating Federal Permitting of Data Center Infrastructure,” which explicitly makes data centers a national priority and thus clears the way for ever larger and more resource-intensive facilities. The United States should instead impose stricter regulations designed to cut down on environmental pollution and address citizens’ concerns over electricity and water consumption. Making data centers more sustainable and less damaging to local communities is the first step toward amplifying the benefits of AI while reducing its drawbacks. Second, the United States should restore funding and expertise to the systems meant to protect the consumer, specifically the Consumer Financial Protection Bureau (CFPB). The CFPB, an agency established in the wake of the 2008 financial crisis, was designed to protect consumers from predatory and abusive practices by large corporations and banks. Weakening the CFPB leaves the consumer dangerously exposed, as a strong bureau is crucial for protecting Americans from emerging AI threats, such as discriminatory algorithms in credit scoring.

In February, the Trump administration fired Rohit Chopra, the director of the CFPB, and replaced him with Russell Vought, a co-author of Project 2025 and a Trump ally. The administration has also moved to lay off 1,400 of the 1,600 staff who work at the CFPB, though these layoffs have been challenged in court and temporarily paused, and it has cut the bureau’s funding cap by nearly half, crippling its effectiveness. Third, the United States should minimize the potential for AI disruption of the workforce by creating programs that educate and train workers to work with AI instead of against it. If workers learn to use AI to augment their work rather than compete against it, the threat AI poses to the job market diminishes. With these specific and clear objectives, the United States can mitigate the harm AI poses and ensure its benefits are broadly shared.

Ultimately, just as the United States emerged from the dot-com crash to see the internet fundamentally reshape society, the current AI boom will likely have to survive its own correction. One key lesson from the 2000 dot-com bubble is that the public ultimately pays the price for unchecked speculation, often through a severe economic downturn. Speculation may be unavoidable, but widespread consumer and worker harm is not.