Redefining Leadership: AI's Potential to Replace Your CEO

The tech industry's water consumption issue is widely acknowledged. Data centers, essential in our increasingly digitalized world, demand constant cooling for optimal performance. However, the cooling process consumes vast quantities of fresh water, often drawn from local U.S. water sources. Unsurprisingly, the burgeoning AI sector, known for its high energy demands, ranks among the most water-intensive players in Silicon Valley.

This thirst for water became evident in Microsoft's latest environmental report. The report highlighted a significant surge in water consumption from 2021 to 2022, coinciding with the acceleration of the company's AI operations. Microsoft's data revealed that over a 12-month period, the company consumed approximately 6,399,415 cubic meters of water, a roughly 30 percent increase over the previous year.
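As a back-of-the-envelope check, the two figures Microsoft reported (total consumption and the year-over-year percentage) let us derive the implied prior-year usage and the size of the jump. The snippet below is a sketch using only the numbers quoted above; the rounding is ours, not Microsoft's.

```python
# Figures taken from the article; the 30 percent increase is approximate.
consumption_2022_m3 = 6_399_415   # cubic meters, fiscal year 2022
increase = 0.30                   # ~30 percent year over year

prior_year_m3 = consumption_2022_m3 / (1 + increase)
added_m3 = consumption_2022_m3 - prior_year_m3

print(f"Implied prior-year use: ~{prior_year_m3:,.0f} m^3")
print(f"Year-over-year increase: ~{added_m3:,.0f} m^3")

# 1 cubic meter = 1,000 liters, so the added draw alone is
# on the order of 1.5 billion liters of fresh water.
print(f"Increase in liters: ~{added_m3 * 1000:,.0f}")
```

In other words, the reported 30 percent jump corresponds to roughly 1.5 million additional cubic meters of water in a single year.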

These findings hardly come as a shock. A study published earlier this year by the University of California, Riverside found that a brief conversation with ChatGPT consumes as much as half a liter of water, the equivalent of a standard bottle. Even more disconcerting, the study estimated that Microsoft used roughly 700,000 liters of water to train GPT-3 over a span of two weeks. The study called the implications of these figures "highly concerning," particularly given that freshwater scarcity ranks among the most pressing challenges of our era.

Shaolei Ren, one of the study's authors, emphasized this week that AI stands out as a notably energy-intensive computing technology. "AI servers boast significantly higher energy density compared to other server types due to their numerous GPUs," Ren explained. "Each AI server can consume between two to three kilowatts of power, whereas standard servers usually operate below 500 watts. This stark disparity in energy density also translates to distinct cooling requirements."

Ren suggested that tech companies have options at their disposal to minimize their water consumption during model training. However, he noted that monitoring whether companies are implementing such measures remains challenging because the majority of AI vendors do not make their relevant data publicly available.

This week, we spoke with Ed Zitron, who runs his own media relations firm, writes the tech-focused Substack 'Where's Your Ed At,' and contributes articles to Insider. In a recent op-ed, Zitron proposed, half in jest, replacing CEOs with AI, a notion that didn't sit well with corporate executives. We talked with Zitron about AI, the labor landscape, and the current challenges within corporate governance. This interview has been edited for clarity and brevity.

For those who haven't had the chance to peruse your op-ed, it's undoubtedly worth a read. However, I'd like to provide you with an opportunity to outline your case briefly. So, in a nutshell, what argument are you advancing in this piece? And why do you advocate for replacing corporate executives with ChatGPT?

The primary argument I'm putting forth revolves around the evolving and somewhat nebulous nature of the CEO role. It has increasingly become a position with minimal accountability and a lack of clearly defined responsibilities. If you delve into the foundational literature regarding the CEO's role, it's surprisingly ambiguous. There was a 2018 Harvard study that examined their activities, which were summarized as 'people,' 'meetings,' and 'strategy.' But what does 'strategy' really entail? It's an incredibly broad term that can encompass virtually anything. So, it seems that CEOs often find themselves in meetings, making decisions like 'we should do this' or 'we shouldn't do that.' The quandary arises when the sole contribution of a CEO in an organization is to receive information and make such decisions. If they lack specialized skills like those of a lawyer or a doctor, it raises the question: What is the essential purpose of their role?

What kinds of responses have you received regarding your article?

The feedback has been quite interesting. On Twitter, the reception has been largely positive, with many expressing agreement and satisfaction with the piece. On LinkedIn, however, opinions have been split right down the middle. It's worth noting that if you voice any critical views about corporate executives on LinkedIn, it tends to stir up strong reactions, primarily from individuals who aren't in executive positions. Interestingly, it's often men who appear to be particularly sensitive to this topic. Nonetheless, a substantial number of people are open to the idea that if a chief executive holds a vague role and doesn't actively contribute to the company's core product yet still commands a substantial salary, perhaps automation or clearer role definition and accountability measures are necessary, including the possibility of termination for underperformance.

What do you believe are the chances of companies embracing your recommendations?

To be completely candid, I see the likelihood of any company adopting these suggestions as exceedingly slim. Let me emphasize that I'm not aware of a single company currently implementing these ideas. That's precisely why I propose an alternative approach in the article: the need for actively engaged CEOs. Personally, I'm deeply involved in the day-to-day operations of my own business, and I'd venture to say I contribute more than my fair share. Besides, why would anyone choose to work under my leadership if I didn't lead by example? This is what has always baffled me about CEOs who are seemingly detached from the hands-on aspects of their organizations. It's akin to an editor who never writes, or an executive editor who merely makes decisions from a distance. Similarly, when you encounter private equity figures who acquire substantial organizations without a genuine understanding of their inner workings and proceed to make questionable decisions, that's where complications arise.

This is essentially the focal point of my Insider piece: the noticeable disconnection between executives and the tangible work produced. It's a foundational concern.

I'm also intrigued to hear your perspective on generative AI and its usage by the executive class in ways that seem detrimental to workers.

Generative AI presents an interesting paradox: it projects an illusion of intelligence but lacks true cognitive capabilities. It often resembles a high-level consultant, like those at McKinsey, proficient at regurgitating content based on a defined dataset.

However, it lacks the depth of real-life experiences, the ability to generate innovative ideas, and the capacity for true learning and reasoning. It's akin to having a giant box of Lego bricks and attempting to construct something without genuine creativity, producing only a rough approximation of what it thinks should be there.

AI often carries an aura of mystique and is accompanied by grandiose claims about how it's poised to 'revolutionize the world.' Yet, when we strip away the hype, the core pitch to companies revolves around AI being a cost-saving tool. It offers the tantalizing prospect of automating a portion of their workforce.

This ties back to our earlier discussion. When executives and managers are detached from the actual production process or the mechanisms at play, their decision-making tends to be solely driven by cost, output, and speed metrics. This is because they lack an intimate understanding of the production process itself—what happens 'inside the machine.' They predominantly focus on what enters the pipeline, what emerges at the end, and the pace at which it all unfolds.