
Riding the wave or risking the crash: navigating the AI bubble

The Rabbit R1: to say that the reality failed to live up to the hype would be an understatement

From Google's new AI Overview suggesting people eat rocks to Humane putting its company up for sale following the disastrous launch of its AI Pin, the news surrounding AI does not always inspire confidence.

AI is everywhere you look—it's the reason your phone suggests cat videos when you’re thinking about them or why your smart fridge knows you're out of ice cream before you do. 

In the relentless race of technological advancement, artificial intelligence (AI) stands as the new frontier. Startups and conglomerates alike are heralding an AI-driven future, promising to transform industries and everyday life for billions. Yet behind this digital gold rush lies a darker reality: Wall Street and venture capital firms pushing for rapid returns, often at the expense of genuine innovation for consumers. 

Are we on the brink of a groundbreaking AI revolution or teetering on the edge of an investment bubble primed to burst?

Not all AI is created equal

The AI industry is experiencing a massive surge in investment, with Wall Street firms and venture capitalists pouring billions into the sector. Sequoia Capital was an early investor in OpenAI, backing its cutting-edge research and development with $1.2 billion. Microsoft, already an investor in OpenAI, has pledged to invest nearly $3 billion in Japan's AI industry. Elon Musk recently raised $6 billion in funding for his OpenAI competitor, xAI. When such large amounts of money are being deployed, we must ask ourselves an important question: is this unprecedented level of investment a reflection of a promising product or mere hype? 

The first discrepancy we must discuss is whether something is genuinely AI or simply labeled AI for promotion purposes. Genuine AI products are systems that autonomously perform tasks typically requiring human intelligence. They rely on machine learning, natural language processing, computer vision, and other advanced technologies to function. And they’ve been in development for decades. This refers to your self-driving cars, preference algorithms on streaming services, and customer support chatbots.

On the flip side, many purported AI products rely on significant human labor, particularly for tasks that current AI technology can’t handle. And most of the time, these workers aren’t compensated very well. Companies might market their products as AI-driven while relying heavily on human workers behind the scenes to perform tasks like data input, image recognition, and content moderation. 

For instance, Amazon's "Just Walk Out" technology, promoted as a revolutionary AI-based checkout system, actually relies heavily on human workers in India. Despite the technology's claims of using computer vision to allow customers to bypass traditional checkouts, approximately 700 out of every 1,000 transactions are manually reviewed by human workers.

Similarly, Presto Automation, a company providing AI-powered drive-thru technology to fast food chains like Del Taco, Hardee's, and Carl's Jr., recently revealed that over 70% of its orders actually require human assistance. This reliance on human workers contrasts with their earlier claims that 95% of orders were processed without staff intervention.

In a sense, these "advances" in AI highlight a systemic problem: investment hype often derails the development of genuine AI products, favoring the use of AI as a marketing buzzword to attract customers and investors.

AI Hardware: Pressure to Deliver 

AI hardware is rapidly advancing, with AI assistants emerging as the leading product class. Start-ups and venture capitalists are heavily investing in AI-powered assistant products, envisioning them as potential successors to the iPhone. These AI assistants promise to revolutionize daily interactions by offering seamless, intuitive support across various tasks. However, despite the considerable potential and excitement surrounding these products, their execution often falls short. 

Take the Humane AI Pin, for example. Its launch was disastrous, primarily due to a lack of functionality and a steep price point that deterred potential buyers. The product, which aimed to be a cutting-edge AI assistant, failed to deliver on its promises, leaving users unimpressed and frustrated. Only two months post-release, the company's founders were already exploring the possibility of selling the business. This swift move toward a sale reflects the risks of overhyping a product without ensuring it meets consumer expectations.

Then, there's the infamous Rabbit R1. This device was first pitched as a more affordable AI agent and positioned as a high-powered alternative to traditional AI assistants. Unlike many AI products built on Large Language Models (LLMs), which can hold text-based conversations (such as ChatGPT) but cannot execute commands, Rabbit sought to introduce the concept of a Large Action Model (LAM). This model promised to perform real-world actions like calling a taxi or ordering food. Additionally, the Rabbit R1 boasted a unique feature called "Learn Mode," designed to let users teach the AI assistant to perform specific tasks, making it a versatile and practical tool for everyday use. But the reality fell far short of expectations. 

Like the Humane AI Pin, the Rabbit R1 launched as an incomplete product riddled with bugs and limited features. Users can only connect to a few apps, and the basic search function often produces inaccurate results. Upon closer inspection, it turns out the hardware is essentially powered by an Android app, leading many to question whether the Rabbit R1 could simply have been an app instead of a standalone device. This revelation has sparked criticism and disappointment among early adopters, who expected a more polished and capable AI assistant. 

So, although the company promises continuous updates and the eventual release of all the initially advertised features, the current iteration of the product is almost unusable. 

These products exemplify how VC-backed, hype-driven items are often launched prematurely in order to be first to market. Even the major players can fall foul of this; witness Google's much-reported problems with AI Overview in its latest overhaul of Search. 

Industry Leaders 

AI industry leaders are no strangers to controversy either. OpenAI, the world's leading AI company, has seen its ChatGPT model become wildly popular, driving unprecedented growth. However, this success has not been without scrutiny. The rapid expansion and influence of OpenAI have raised questions about the ethical implications and societal impact of its technologies. Much of this controversy centers on OpenAI's convoluted governance structure. 

OpenAI's unusual corporate governance structure, which combines non-profit and for-profit entities, has generated significant confusion and internal conflict. The core issue revolves around balancing safety concerns with profitability, leading to numerous controversies. This ongoing struggle has overshadowed the company's rapid growth and technological advancements, highlighting the challenges of maintaining ethical standards while pursuing financial success.

Final Thoughts

The potential of AI technology is immense, promising to revolutionize our lives in countless ways. However, the drive for profitability, fueled by venture capitalists and Wall Street, often overshadows consumer interests. 

To navigate these challenges, AI companies must prioritize transparency, conduct external audits, and uphold strong ethical standards. Founders must adopt a balanced approach, focusing on technological integrity and consumer well-being over short-term financial gains. By doing so, they can ensure that AI development progresses responsibly, benefiting society while maintaining trust and accountability.
