This Week in AI: More capable AI is coming, but will its benefits be evenly distributed?


Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

The AI news cycle didn’t slow down much this holiday season. Between OpenAI’s 12 days of “shipmas” and DeepSeek’s major model release on Christmas Day, blink and you’d have missed a new development.

And it’s not slowing down now. On Sunday, OpenAI CEO Sam Altman said in a post on his personal blog that he thinks OpenAI knows how to build artificial general intelligence (AGI) and is beginning to turn its aim to superintelligence.

AGI is a nebulous term, but OpenAI has its own definition: “highly autonomous systems that outperform humans at most economically valuable work.” As for superintelligence, which Altman understands to be a step beyond AGI, he said in the blog post that it could “massively accelerate” innovation well beyond what humans are capable of achieving on their own.

“[OpenAI continues] to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes,” Altman wrote.

Altman — like OpenAI rival Anthropic’s CEO, Dario Amodei — is of the optimistic belief that AGI and superintelligence will lead to wealth and prosperity for all. But assuming AGI and superintelligence are even feasible without new technical breakthroughs, how can we be sure they’ll benefit everyone?

One concerning data point comes from a recent study flagged by Wharton professor Ethan Mollick on X early this month. Researchers from the National University of Singapore, University of Rochester, and Tsinghua University investigated the impact of OpenAI’s AI-powered chatbot, ChatGPT, on freelancers across different labor markets.

The study identified an economic “AI inflection point” for different job types. Before the inflection point, AI boosted freelancer earnings; web developers, for example, saw roughly a 65% increase. But after the inflection point, AI began replacing freelancers, and translators saw roughly a 30% drop.

The study suggests that once AI starts replacing a job, it doesn’t reverse course. And that should concern all of us if more capable AI is indeed on the horizon.

Altman wrote in his post that he’s “pretty confident” that “everyone” will see the importance of “maximizing broad benefit and empowerment” in the age of AGI and superintelligence. But what if he’s wrong? What if AGI and superintelligence arrive, and only corporations have something to show for it?

The result won’t be a better world, but more of the same inequality. And if that’s AI’s legacy, it’ll be a deeply depressing one.

News

Illustration: employees at computers alongside a giant robot. Image Credits: Moor Studio / Getty Images

Silicon Valley stifles doom: Technologists have been ringing alarm bells for years about the potential for AI to cause catastrophic damage. But in 2024, those warning calls were drowned out.

OpenAI losing money: OpenAI CEO Sam Altman said that the company is currently losing money on its $200-per-month ChatGPT Pro plan because people are using it more than the company expected.

Record generative AI funding: Investments in generative AI, which encompasses a range of AI-powered apps, tools, and services to generate text, images, videos, speech, music, and more, reached new heights last year.

Microsoft ups data center spending: Microsoft has earmarked $80 billion in fiscal 2025 to build data centers designed to handle AI workloads.

Grok 3 MIA: xAI’s next-gen AI model, Grok 3, didn’t arrive on time, adding to a trend of flagship models that missed their promised launch windows.

Research paper of the week

AI might make a lot of mistakes. But it can also supercharge experts in their work.

At least, that’s the finding of a team of researchers hailing from the University of Chicago and MIT. In a new study, they suggest that investors who use OpenAI’s GPT-4o to summarize earnings calls realize higher returns than those who don’t.

The researchers recruited investors and had GPT-4o generate earnings-call summaries tailored to each investor’s expertise. Sophisticated investors got more technical AI-generated notes, while novices got simpler ones.

The more experienced investors saw a 9.6% improvement in their one-year returns after using GPT-4o, while the less experienced investors saw a 1.7% boost. That’s not too shabby for AI-human collaboration, I’d say.

Model of the week

METAGENE-1’s performance on various benchmarks. Image Credits: Prime Intellect

Prime Intellect, a startup building infrastructure for decentralized AI system training, has released an AI model that it claims can help detect pathogens.

The model, called METAGENE-1, was trained on a dataset of over 1.5 trillion DNA and RNA base pairs sequenced from human wastewater samples. Created in partnership with the University of Southern California and SecureBio’s Nucleic Acid Observatory, METAGENE-1 can be used for various metagenomic applications, Prime Intellect said, like studying organisms.

“METAGENE-1 achieves state-of-the-art performance across various genomic benchmarks and new evaluations focused on human-pathogen detection,” Prime Intellect wrote in a series of posts on X. “After pretraining, this model is designed to aid in tasks in the areas of biosurveillance, pandemic monitoring, and pathogen detection.”

Grab bag

In response to legal action from major music publishers, Anthropic has agreed to maintain guardrails preventing its AI-powered chatbot, Claude, from sharing copyrighted song lyrics.

Labels, including Universal Music Group, Concord Music Group, and ABKCO, sued Anthropic in 2023, accusing the startup of copyright infringement for training its AI systems on lyrics from at least 500 songs. The suit hasn’t been resolved, but for the time being, Anthropic has agreed to stop Claude from providing lyrics to songs owned by the publishers and from creating new song lyrics based on the copyrighted material.

“We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use,” Anthropic said in a statement.
