Sam Altman’s Vision for the Future

OpenAI CEO on progress, safety, and policy


OpenAI CEO Sam Altman | PHOTOGRAPH BY JENNY JIA

“I really like things that, if they work, really matter—even if they don’t have a super high chance of working,” Sam Altman, cofounder and CEO of OpenAI, told a crowd of students who packed Memorial Church to hear him speak on May 1. He explained what drew him to artificial intelligence research when he was an undergraduate student at Stanford 20 years ago, before the topic had become the all-consuming force it is today: “It seemed like if AI could work, it would be, like, the coolest, most important, most exciting thing. And so, it was worth pursuing.”

Altman was in Cambridge for a series of events hosted by Xfund, an early-stage venture-capital firm founded at Harvard in 2011 (though it operates independently of, and without funding from, the University). At the Memorial Church event, David Parkes, dean of Harvard’s Paulson School of Engineering and Applied Sciences, presented Altman with the 2024 “Xfund Experiment Cup,” which is awarded to “extraordinary founders from the world’s best universities.”

The event’s moderator—Patrick Chung ’96, J.D.-M.B.A. ’04, managing general partner of Xfund and an investor in Altman’s first company—gestured to the crowd of more than a thousand excited students, who had begun lining up in a locked-down Harvard Yard an hour before the event started. “They look at you and they aspire, they dream to have the type of impact that you have had on the world,” Chung said. What advice, he asked, would Altman give to himself at their age?

“I think that you can just do stuff in the world,” Altman said in response, with characteristic, deliberative pauses between his words. “You don’t need to wait, you don’t need to get permission. You can—even if you’re totally unknown in the world, with almost no resources—you can still accomplish an amazing amount.”

Some have been critical of Altman’s relentless push for progress in the face of advanced AI’s potential dangers. Altman himself signed a letter last year describing AI as an extinction risk for humanity; testifying before Congress, he said that “if this technology goes wrong, it can go quite wrong.” But he’s skeptical that slowing progress is the way to mitigate those threats. Working on AI 20 years ago, he had no idea it would become the technology it is today—and “even now, the most critical decisions we’re making, we’re not aware of their importance” in the moment, he said.

Progress, he continued, is unpredictable and hard to regulate. “There’s someone, somewhere, in OpenAI right now, making some phenomenally important discovery—I don’t know what it is, I just know it’s going to happen statistically—that may very much shape the future,” he said. “I totally agree, on the surface, that we should feel tremendous responsibility and get big decisions right—but you don’t always know when that’s coming.” The results of progress are what should be controlled: “Deciding to deploy GPT-5 or not, deciding where the threshold should be—we put extreme care into that,” he said.

Much of the decision-making power about how quickly to develop and deploy new AI rests in the hands of private, for-profit companies like OpenAI—and some worry about that power being concentrated among so few individuals. Those concerns spread to OpenAI’s board in November, when Altman was ousted as CEO—partly over safety concerns—before being reinstated a few days later. (Eliot University Professor and Harvard president emeritus Larry Summers was brought onto the new board in the wake of the crisis, presumably to deepen its leadership experience.)

Sam Altman speaks at a press conference before his talk. | PHOTOGRAPH BY JENNY JIA

During an interview with Harvard Magazine and other publications before his talk, Altman spoke about the challenges of developing AI in the private sector. “I think it’s exciting that [AI development] is happening in private industry,” he said, “but in a different time or different configuration of the world, it would happen in government.” Since it’s not, a “shared understanding” between government and industry is important. Right now, he said, the two groups’ expectations of what AI progress will look like in the coming years diverge too widely. In late April, Altman joined the Department of Homeland Security’s Artificial Intelligence Safety and Security Board to help facilitate these conversations.

Still, Altman said during his talk, if he could go back to 2015, when he co-founded OpenAI, he would start it as a for-profit company—not as a nonprofit, as the organization was until 2019. The development of advanced AI simply required too many resources, OpenAI’s website says, to be sustained by a nonprofit model.

With the transition to a profit-driven structure, OpenAI became closed-source: its underlying code and implementation details are not made available to the public. This makes it impossible for many in academia to study the model, and researchers have turned to workaround measures such as analyzing patterns in ChatGPT’s outputs. In response to a question from Harvard Magazine about these challenges, Altman said that OpenAI has formed academic partnerships: “To give one example, we give academic researchers access to the GPT-4 base model weights,” he said, though there is no publicly available information about this program. “But finding ways to collaborate with and unlock academia, for lack of a better word, seems really important.”

One question some in the academy seek to address is how AI models produce their outputs. ChatGPT’s inner mechanisms are currently not understood—and though academic researchers are working on the problem, it’s difficult to do so with the closed-source ChatGPT in particular. Some believe it’s important to understand those mechanisms before developing autonomous models or applying AI in high-stakes settings like healthcare.

“We’re pursuing it. I don’t have anything specific, like, we’ve cracked it,” Altman said to Harvard Magazine about whether OpenAI is also studying the issue. But to “fully understand” how ChatGPT produces outputs might not be a feasible goal, he said: “Do I think we can understand these systems in important ways? Yes,” he said. “Would I say we’re able to understand exactly what every artificial neuron is doing? I’m not sure. But I’m not even sure that’s the right question to focus on.”

Meanwhile, the profit motive continues to drive progress. After Altman’s conversation in Memorial Church, he heard pitches from eight teams of Harvard and MIT students—pre-selected from almost a thousand submissions—as part of a competition to win a $100,000 investment from Xfund.

Updated May 6 to clarify that Xfund is operated and funded independently of the University.

Nina Pasquini
