This Week in AI: Musk bids for OpenAI

Michael


Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

The billionaires are fighting again.

On Monday, Elon Musk, the world’s richest man, offered to buy the nonprofit that effectively governs OpenAI for $97.4 billion. In response to Musk’s offer, OpenAI CEO Sam Altman authored a cheeky post on X, writing, “No thank you, but we will buy Twitter for $9.74 billion if you want.” (Musk and investors famously purchased Twitter for $44 billion in 2022.)

Musk’s bid, serious or not, may complicate OpenAI’s effort to convert to a for-profit public benefit corporation within two years. Now OpenAI’s board will have to demonstrate that it isn’t undervaluing the nonprofit by handing its assets, including IP from OpenAI’s research, to an insider (e.g., Altman) at a discount.

OpenAI could make the case that Musk’s bid is a hostile takeover attempt, given that Musk and Altman aren’t the best of friends. It could also argue that Musk’s offer isn’t credible because OpenAI is already in the midst of a restructuring process. Or OpenAI could challenge Musk on whether he actually has the funds to back his bid.

In a statement Tuesday, Andy Nussbaum, outside counsel representing OpenAI’s board, said that Musk’s bid “doesn’t set a value for [OpenAI’s] nonprofit” and that the nonprofit is “not for sale.” Nussbaum added, “Respectfully, it is not up to a competitor to decide what is in the best interests of OpenAI’s mission.”

My colleague Maxwell Zeff and I wrote a more detailed piece on what to expect in the coming weeks. But one thing seems certain: Musk’s offer, not to mention his ongoing lawsuit against OpenAI over what he claims is fraudulent conduct, promises to make for fierce courtroom brawls.

News

Image Credits: Apple

Apple’s new robot: Apple created a research robot that takes a page from Pixar’s playbook. The company’s robotic lamp operates as a more kinetic version of a HomePod or other smart speaker. The person facing the lamp asks a question, and the robot responds in Siri’s voice.

Is AI making us dumb?: Researchers recently published a study looking at how using generative AI at work affects critical thinking skills. It found that when we rely too much on AI to think for us, we get worse at solving problems ourselves when AI fails.

AI for all, perhaps: In a new essay on his personal blog, Altman admitted that AI’s benefits may not be widely distributed — and said that OpenAI is open to “strange-sounding” ideas like a “compute budget” to “enable everyone on Earth to use a lot of AI.”

Christie’s controversy: Fine art auction house Christie’s has sold AI-generated art before. But soon it plans to hold its first show dedicated solely to works created with AI, an announcement that has been met with mixed reviews — and a petition calling for the auction’s cancellation.

Better than gold: An AI system developed by Google DeepMind, Google’s leading AI research lab, appears to have surpassed the average gold medalist in solving geometry problems in an international mathematics competition.

Research paper of the week

MIT CSAIL AI benchmark errors
Image Credits: MIT CSAIL

We know that most AI models can’t perform basic tasks reliably, like solving grade-school-level math problems. What we don’t always know is the reason behind their failures. According to a team of researchers at MIT CSAIL, erroneous benchmarks may be in part to blame.

In a new study, the MIT CSAIL researchers found that while today’s top-performing models still make genuine mistakes on popular AI benchmarks, over 50% of “model errors” are actually caused by mislabeled and ambiguous questions in those benchmarks.

“If we want to properly quantify model reliability, we need to rethink how we construct benchmarks to minimize label errors,” said one of the researchers, MIT faculty member and OpenAI staffer Aleksander Madry, in a post on X. “This is just a first step.”

Model of the week

Boring deepfakes
Image Credits: kudzueye

You’ve heard of deepfakes before. But what about deepfakes of boring everyday scenes? That’s the idea behind Boring Reality Hunyuan LoRA (Boreal-HL), a fine-tuned AI video generator that excels at creating videos of … well, pretty banal stuff.

Boreal-HL can generate clips of tourists eating ice cream, people barbecuing meat, people in lunch meetings, executives giving speeches at conferences, couples at weddings, and other mundane slices of life. This reporter finds the absurdity of the thing hilarious — particularly considering how impractical it is to run. It takes Boreal-HL at least five minutes to generate a single clip.

Grab bag

Thanks to recent breakthroughs in AI efficiency, it’s getting cheaper — and easier — to train highly sophisticated models.

In a new paper, researchers at Shanghai Jiao Tong University and an AI company called SII demonstrate that a model trained on just 817 “curated training samples” can outperform models trained on 100x more data. The team claims that their model was even able to answer certain questions it hadn’t seen during the training process, showing what they call “out of domain” capabilities.

The study follows on the heels of a Stanford-led project that found it’s possible to create an “open” model rivaling OpenAI’s o1 “reasoning” model for under $50.
