In the Name of Humanity: An Anatomy of OpenAI’s Ten-Year Governance Experiment

April 2026

Article author: Crystal Yuan

Advisor: Dazhong Wu

Opening: Ninety-Six Hours

On the evening of Thursday, November 16, 2023, in the San Francisco Bay Area, OpenAI’s Chief Scientist Ilya Sutskever sent Sam Altman a text message inviting him to a Google Meet video conference at noon the next day. Less than ten minutes into that Friday meeting, Altman learned on camera that he had been fired. The board’s public statement, issued later that day, was restrained in tone but grave in implication—it said Altman “was not consistently candid in his communications with the board.”[OpenAI board statement, November 17, 2023. The “not consistently candid” phrasing became the single most-quoted sentence from the original announcement.] Microsoft, the company’s principal investor—by then having committed over ten billion dollars in capital and cloud computing resources—was reportedly given only minutes of advance notice before the announcement.[On the timing of Microsoft’s notification, see Time magazine, November 22, 2023, “Sam Altman Returns as OpenAI CEO. Here’s How It Happened.”]

This was a rare scene in American corporate governance history: a company valued at roughly eighty billion dollars, whose product had over one hundred million monthly active users, had its co-founder and CEO removed by a six-person nonprofit board previously little known to the public, through a single video call.[OpenAI’s approximate $80 billion valuation in October 2023, see CNBC, October 28, 2025, “OpenAI completes restructure”; the composition of the board at the time of Altman’s removal—Sutskever, Adam D’Angelo, Helen Toner, Tasha McCauley, Altman, Brockman. See Wikipedia, “Removal of Sam Altman from OpenAI” (accessed April 2026).]

Over the ninety-six hours that followed, OpenAI cycled through three CEOs. On Saturday, November 18, according to later disclosures by The Guardian, Bloomberg, and others, as well as Sutskever’s October 2024 sworn deposition in Musk v. Altman, the board briefly explored merging OpenAI with its competitor Anthropic—a company founded, as it happens, by several former OpenAI employees who had left over disagreements on safety philosophy.[See Sutskever’s sworn deposition in Musk v. Altman (N.D. Cal. 4:24-cv-04722), October 1, 2024, relevant portions unsealed in early 2026.] According to Sutskever’s testimony, when OpenAI executives warned the board that “if Altman doesn’t return, the company will collapse, and this is contrary to OpenAI’s mission,” director Helen Toner reportedly responded, in substance, that the dissolution of the company might itself be consistent with its safety mission.[Ibid. Toner’s statement entered the record through Sutskever’s testimony; Toner herself has not publicly responded to the specific wording.]

The turning point came from the employees. By Sunday evening, the slogan “OpenAI is nothing without its people” began appearing in coordinated posts across social platforms. By Monday morning, about 500 of roughly 770 employees had signed an open letter demanding the board’s collective resignation and Altman’s reinstatement, failing which they would resign en masse and join Microsoft—which had by then publicly offered to employ any OpenAI staff member at equivalent compensation in a newly established AI unit. By Monday evening the number of signatories had grown to 738, roughly 95% of all employees. More dramatically, the signatories included Ilya Sutskever himself—the very Chief Scientist who had personally led Altman’s removal.[Dynamics of the employee joint letter: see NPR, November 20, 2023, “Hundreds of OpenAI workers threaten to leave”; Time magazine, November 22, 2023; and HubSpot’s November 2023 compiled timeline. The 770-employee headcount and 738-signatory figures come from NPR’s contemporaneous coverage.]

Late on November 22, Altman returned. Of the six original board members, five were out; only Adam D’Angelo remained. The new board chair was Bret Taylor, former co-CEO of Salesforce; former Treasury Secretary Larry Summers joined. Microsoft received a non-voting observer seat.[New board composition: see Time magazine, November 22, 2023. Bret Taylor as chair, Summers joining, D’Angelo retained.]

If one reads these ninety-six hours merely as a senior-executive personnel dispute, the case’s real lesson is lost. What it actually reveals is a peculiar fact of corporate law: the nonprofit board that held the highest authority under OpenAI’s charter exercised the full legal power it had been given—and discovered that this power was no match for the organization’s real power structure. And this happened less than eight years after OpenAI was established as a nonprofit “to ensure that artificial general intelligence benefits all of humanity.”

This essay attempts to tell that evolution clearly—not as a chronicle of the AI industry, but as a governance study: how an organization whose fiduciary duty runs to “all humanity,” whose structural innovation centers on “capital serving mission,” completed in less than ten years something close to the opposite of its founding premise. The structural predicaments this process exposes carry more direct lessons for NGOs—especially those currently debating hybrid structures, scale expansion, and capital partnerships—than they do for the AI industry itself.

I. An Organization Contradictory by Design (2015–2018)

OpenAI was founded in December 2015 by a group of more than ten people—including Altman, Elon Musk, Greg Brockman, and Ilya Sutskever—and registered as a nonprofit corporation in the state of Delaware. The founding press release announced a $1 billion funding commitment, with backers including Musk, Altman, Brockman, Peter Thiel, Reid Hoffman, and Jessica Livingston, as well as AWS, Infosys, and YC Research.[OpenAI founding announcement: see TechCrunch, December 11, 2015, and OpenAI official press release, December 11, 2015.]

But that $1 billion was, from the start, a check whose rhetorical significance far exceeded its financial reality. According to the company’s own disclosures, by 2019 the actual funds received from these commitments amounted to approximately $130 million; Musk’s cumulative donations, by OpenAI’s official account, totaled under $45 million, with Musk himself stating in later litigation that the figure was approximately $38 million.[On the gap between commitments and received funds: Wikipedia, “OpenAI” entry (accessed April 2026), citing OpenAI’s disclosures to regulators, approximately $130 million received by 2019. Musk’s actual contribution: OpenAI’s official account states “under $45 million”; Musk’s litigation statement puts it at approximately $38 million. See CNBC, April 7, 2026, and Let’s Data Science’s March 2026 case summary of “Musk v. Altman.”] That is to say, during the first four years of its nonprofit existence, OpenAI’s real capital base was roughly that of a mid-sized foundation—while the rival it sought to catch, Google DeepMind, was backed by one of the most valuable companies in the world.

In April 2018, OpenAI released its Charter, establishing a set of principles. The most legally striking sentence reads: “Our primary fiduciary duty is to humanity.”[OpenAI Charter, original text at openai.com/charter (published April 9, 2018).] Traditional nonprofit boards owe fiduciary duty to “the public interest” or to specific “charitable purposes”; pointing the duty directly at “humanity” has almost no antecedent in Anglo-American trust law or in corporate-law history.

Here lies the first deep fissure in OpenAI’s governance architecture: “humanity” does not constitute a justiciable subject under law. Nonprofit oversight in California and Delaware is vested in the respective state attorneys general—and the attorneys general represent the public of their state, not “humanity.” Board members who deviate from mission can, in theory, be pursued by the attorneys general, but only if specific harm to the state’s charitable trust can be proven. The “duty to humanity” is grand on paper, but the judicial toolbox contains almost no wrench for it. It is a soothing rhetorical clause; its binding force depends almost entirely on the directors’ conscience and external public pressure.

That same year, Musk left the board; the official reason was that Tesla’s investments in autonomous-driving AI could constitute a conflict of interest. There was another backdrop: according to emails disclosed later, Musk had tried without success to persuade the board to let Tesla acquire OpenAI, and on the eve of his departure warned the co-founders in an email that if the organization was moving “toward a tech startup rather than a nonprofit,” he would cease funding. Altman’s reply the next day carried a single core message: he remained committed to the nonprofit structure.[On Musk’s 2017 email threat to cease funding and Altman’s reply: see CNBC, December 11, 2025, “Altman and Musk launched OpenAI as a nonprofit 10 years ago.” The relevant emails were submitted as evidence in Musk v. Altman.] This correspondence was made public during evidence disclosure in Musk’s 2024 suit against OpenAI, becoming one of the most pivotal pieces of documentary evidence in the entire litigation.

These three years (2015–2018) are often described as OpenAI’s “idealist phase.” But the nonprofit identity in this phase was less pure idealism than a vehicle for two inherently incompatible functions: internally, it was a recruiting brand—drawing researchers who refused to work for Google or Facebook and inviting them to believe they were working for humanity rather than shareholders; externally, it was a regulatory shield—it allowed the company to lobby policymakers, obtain partnerships, and sidestep the commercial scrutiny typically aimed at large AI labs, all while presenting itself as a public-interest research institution.

These two functions did not develop their contradiction during later commercialization. They were embedded in the design from day one. An organization whose banner was “benefit all humanity” simultaneously needed to recruit the world’s most expensive talent, purchase DGX servers at $300,000 apiece, and pay compensation that competed with the largest technology firms—this seam was not one that opened “by accident” during development, but a fault line built into the foundation itself.

II. The 2019 Invention: A Legal Anatomy of the Capped-Profit Structure

By the end of 2018, this fault line could no longer be ignored. Training costs for AI models were rising nearly tenfold per year, while philanthropic channels could not scale at that rate. According to later disclosures, Altman had tried to raise $100 million in pure nonprofit form but “wasn’t getting enough money in fast enough”; continued donation-based fundraising was not a viable path.[Altman’s statements about funding difficulties: made in multiple public interviews in 2023, including with Lex Fridman and Bloomberg.]

In March 2019, OpenAI announced a structural innovation that would be repeatedly dissected thereafter: under the nonprofit parent OpenAI, Inc., it created OpenAI LP, a “capped-profit” limited partnership. The parent, acting as general partner (GP), controlled the LP; investors entered as limited partners (LPs), with their returns subject to a ceiling—first-round investors could receive up to 100x their principal, with the cap declining in later rounds.[OpenAI LP structure and the 100x return cap: see OpenAI’s March 11, 2019 blog post “OpenAI LP” and Time magazine, October 28, 2025, “An OpenAI Timeline.”]
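The cap mechanics are easy to state precisely. The following is a minimal sketch of the payout logic as described in public accounts—the function name and the dollar figures are illustrative assumptions, since the LP’s actual waterfall terms were never fully disclosed:

```python
def capped_payout(principal: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a gross distribution between a capped investor and the nonprofit.

    The investor keeps at most cap_multiple * principal; any excess flows
    to the mission-controlled parent. Illustrative sketch only -- the real
    LP waterfall terms were never fully public.
    """
    cap = cap_multiple * principal
    investor_share = min(gross_return, cap)
    nonprofit_share = max(gross_return - cap, 0.0)
    return investor_share, nonprofit_share

# A hypothetical first-round investor who put in $10M is capped at $1B,
# however large the eventual distribution; everything above the cap
# flows back to the nonprofit parent.
print(capped_payout(10e6, 5e9))  # -> (1000000000.0, 4000000000.0)
```

What the sketch makes visible is that the cap binds only in scenarios of extreme success—which, as the first weakness discussed below shows, is exactly the region where the terms later proved movable.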

The elegance of this structure lay in its apparent simultaneous resolution of three problems. First, it opened a channel for capital—investors could expect returns, subject only to a ceiling. Second, it preserved mission priority—the LP’s operating agreement explicitly stated that the company might never be profitable, and advised potential investors to participate “in the spirit of a donation.”[On the LP operating agreement’s “in the spirit of a donation” caution: see Northwestern University Law School Professor Jill Horwitz’s comments to Time, October 28, 2025.] Third, it preserved nonprofit governance—the GP (i.e., the nonprofit parent) held absolute control over the LP, and board members remained legally accountable to “humanity,” not to capital providers.

In public discourse, the capped-profit model was widely celebrated as “a new architecture for taming capital.” It attracted Microsoft’s initial $1 billion investment in July 2019, followed by an additional $10 billion commitment in early 2023.[On Microsoft’s 2019 and 2023 investment scale: see joint OpenAI-Microsoft announcements and CNBC, October 28, 2025. Cumulative Microsoft investment exceeds $13 billion.] But when we compare the structure’s legal text with its actual subsequent operation, three critical weaknesses emerge.

First: the actual effectiveness of the return cap is questionable. A 100x return for early investors is, within any foreseeable time horizon, essentially a nominal ceiling—reaching it would mean OpenAI attaining a trillion-dollar valuation and sustained profitability. More significantly, according to independent reporting by The Information and The Economist, OpenAI quietly amended the relevant terms around 2023 to permit the return cap to grow 20% annually starting in 2025. This adjustment was never proactively disclosed by OpenAI; at 20% annual growth the nominal cap roughly doubles every four years, expanding more than a thousandfold over forty years.[On the 20% annual return-cap growth amendment: first reported by The Economist and The Information, later systematized by openaifiles.org (2025). This amendment was not proactively disclosed by OpenAI.] The ceiling that once stood as the emblem of “taming capital” has, at the institutional level, been hollowed out by a technical revision most of the public has never known about.
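A quick check on that arithmetic, as a minimal sketch (the 20% annual growth rate comes from the reporting cited above; the time horizons are illustrative assumptions):

```python
# Compounding of the nominal return cap at 20% annual growth (illustrative).
def cap_growth(years: int, annual_growth: float = 0.20) -> float:
    """Multiple by which the cap has grown after `years` of compounding."""
    return (1 + annual_growth) ** years

for years in (4, 10, 20, 40):
    print(f"after {years:2d} years: ~{cap_growth(years):,.1f}x")

# after  4 years: ~2.1x       (roughly a doubling)
# after 10 years: ~6.2x
# after 20 years: ~38.3x
# after 40 years: ~1,469.8x   (well past a thousandfold)
```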

Second: nonprofit control is legally absolute but operationally fragile. The parent’s control over the LP rests on the assumption that the board is “independent and competent to exercise oversight.” But the board itself is self-perpetuating—sitting directors select new directors—and its size hovers between six and nine members. This means one or two director changes can significantly alter the internal balance of power, while the appointment process lacks any external check.

Third: employee incentives are structurally misaligned with the mission. Rather than traditional stock, OpenAI employees hold Profit Participation Units (PPUs) tied to LP profits. PPU value tracks the company’s valuation—which was $14 billion in a 2021 employee share sale, leapt to $300 billion in an April 2025 funding round led by SoftBank, and was formally set at $500 billion following an October 2025 secondary-market employee sale, surpassing SpaceX to become the world’s most valuable startup.[Valuation trajectory: $14 billion in 2021 (Pitchbook); approximately $80 billion in November 2023; $150 billion in October 2024; $300 billion with SoftBank-led round in April 2025; $500 billion in secondary-market valuation in October 2025. See Bloomberg, October 2, 2025, “OpenAI Completes Share Sale at Record $500 Billion Valuation.”] For early employees, this meant roughly a 35-fold nominal wealth increase in four years. It is, in effect, a “shadow incentive contract” outside the board’s purview—one that binds the economic interests of hundreds of core employees firmly to the variable of “continued valuation growth,” not to “mission preservation.”

These three weaknesses together paint the real picture of the capped-profit structure: it more closely resembles a capital-friendly governance wrapper than an arrangement with rigid protective power for the mission. The charter’s constraint mechanisms exist, but most of them appeal to directors’ moral self-discipline or external public pressure—not enforceable legal weapons. The structure was long regarded as an innovation largely because it had not been pressure-tested. Then, in November 2023, it was tested for the first time—and it failed.

III. The Real Meaning of the 2023 Crisis

The dominant narrative of those ninety-six hours focuses on “the board versus the CEO.” But that framing omits the most important fact: what determined the outcome was not the board, not the investors, not Microsoft, but the roughly 738 employees who signed the joint letter.

Legally, the board of OpenAI, Inc. held the full power to remove the CEO. It exercised that power. Microsoft held enormous economic interests but no board seat; the role of its CEO, Satya Nadella, during the weekend’s phone calls and public statements was that of a source of pressure, not a decision-maker. Investors—Thrive Capital, Sequoia, and others—had strong stakes but no formal governance rights. What ultimately forced the board to reverse its decision was an ultimatum from 95% of the staff: if you do not resign, we leave en masse.

This dynamic has almost no formal status in traditional corporate-law frameworks. In American corporate law, employees are not typically governance subjects; their interests are protected by employment contracts but do not constitute institutional checks on board decisions. But in a knowledge-intensive organization like OpenAI, a company’s assets are almost entirely equivalent to its employees’ human capital—the engineers who trained GPT-4, the core researchers, the infrastructure engineers—their collective exit means the company’s value evaporates overnight. Microsoft understood this clearly, which is why it made that critical tactical move: publicly committing to hire any OpenAI employee willing to leave. This, in the eyes of all employees, transformed “mass resignation” from a bargaining threat into a concretely executable option.

The board had no countermove available against this structure. Its legal power was real, but the exercise of that power presupposed that the organization would continue to exist under its decisions—and once employees announced collective departure, the organization itself would cease to exist, and the board’s “power” would have nothing to act upon. What emerged is a phenomenon worth dwelling on in governance studies: formal governance structures were hollowed out by informal collective employee action, and that hollowing out was public, startlingly rapid, and left almost no legal countermeasure.

One detail worth noting: Sutskever, the Chief Scientist who led the effort to remove Altman, was also a signatory—within about 72 hours he had signed the joint letter and publicly expressed his “deep regret” on X.[Sutskever’s public apology on X (formerly Twitter), November 20, 2023.] In his October 2024 sworn deposition in the Musk litigation, he acknowledged that he had relied on materials provided by Chief Technology Officer Mira Murati and had not independently verified the core allegations.[Content of Sutskever’s testimony: see the October 1, 2024, sworn deposition in Musk v. Altman, unsealed in early 2026.] This detail adds another layer of observation on the question of the “governance subject”: in a highly technical, fast-moving organization, even core insiders sitting on the board may, at critical moments, lack the complete information and time needed for prudent judgment. The “legitimacy” of a board decision faces not only external pressure but also the fragility of its internal informational basis.

The implications for mission-driven organizations go beyond the surface lesson of “handle CEO-board relations well.” What is revealed is a deeper regularity: when an organization’s assets consist primarily of talent, and that talent is highly mobile and actively courted by external capital, then no matter how the charter designs the board’s statutory powers, the organization’s actual sovereignty has already diffused to the employee layer. For many international NGOs that have scaled past a certain size, this pattern is already playing out in parallel—senior researchers, program officers, and senior fundraisers often exert actual control over the organization that far exceeds what the charter describes. The November 2023 OpenAI episode merely demonstrated this regularity over ninety-six hours under a global spotlight.

IV. From Capped-Profit to PBC: A Restructuring Still Unfinished

After the 2023 crisis, OpenAI’s path had in effect already turned. In mid-2024, the company began preparing to restructure the for-profit entity from an LP to a Public Benefit Corporation (PBC). In September and October 2024, respectively, California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings initiated formal review proceedings—because OpenAI, Inc. is registered in Delaware and operates in California, any substantial restructuring involving the transfer of charitable assets falls within both attorneys general’s oversight jurisdiction.[Investigations launched by AG Bonta (California) and AG Jennings (Delaware), respectively in September and October 2024. See Delaware DOJ, October 28, 2025, “AG Jennings completes review of OpenAI recapitalization.”]

The technical core of this review was a seemingly dry but in fact structurally decisive question: how much is the “control right” held by the nonprofit parent worth?

Under California’s charitable trust principles, all assets of a nonprofit are “irrevocably” dedicated to its charitable purpose. If these assets are transferred to a for-profit entity at too low a price, it is in essence a theft of charitable assets. California precedent is stark: in the 1990s, when Blue Cross of California converted from nonprofit to for-profit, it was required to transfer over $3 billion in stock into two independent foundations as compensation; Health Net underwent a similar mandatory compensation arrangement.[On the California Blue Cross and Health Net precedents: see the January 29, 2025, joint letter from San Francisco Foundation et al. to Bonta and Jennings, and CalMatters, October 30, 2025.]

Counter-pressure arrived during the restructuring from an unexpected direction. On February 10, 2025, a consortium led by Elon Musk sent the OpenAI nonprofit parent an unsolicited acquisition offer of startling size—$97.4 billion. Four days later the board rejected it, but the figure was already on the bargaining table: it effectively posed a question the two attorneys general could not easily avoid—if Musk’s consortium was willing to pay nearly a hundred billion dollars for the nonprofit entity, then its “fair value” could not be set substantially below that level.[On the Musk consortium’s $97.4 billion acquisition offer: see Wikipedia, “OpenAI” (accessed April 2026), and CNBC’s February 2025 reporting. Offer extended February 10, 2025; rejected by the OpenAI board February 14.]

A coalition of more than 60 California nonprofits called “Eyes on OpenAI,” along with Washington-based Public Citizen, pressured the two attorneys general repeatedly from late 2024 through all of 2025, invoking the Blue Cross precedent to demand that OpenAI’s charitable assets be transferred to an independent foundation. Their core argument: that since November 2023, OpenAI had de facto been operating as a for-profit company, and the nonprofit parent was merely a rubber stamp for the for-profit entity.[The Eyes on OpenAI coalition and Public Citizen opposition: see Public Citizen’s September 12, 2025, letter to both attorneys general, and the “OpenAI Files” website (openaifiles.org) 2025 compilation of restructuring objections.]

The final outcome was a compromise. On October 28, 2025, OpenAI announced the restructuring’s completion: the for-profit entity was converted into a Delaware Public Benefit Corporation, formally named OpenAI Group PBC; the former nonprofit parent was renamed OpenAI Foundation and held roughly 26% of the PBC’s equity, worth approximately $130 billion at the then-prevailing valuation; Microsoft held approximately 27%, worth roughly $135 billion; the remaining 47% was distributed among employees and other investors. If the PBC’s valuation grew tenfold over the next fifteen years, the Foundation would receive additional equity. The Foundation also retained the power to appoint PBC directors, and through a “Safety and Security Committee” held a suspension right over AI model releases.[October 28, 2025, final restructuring structure: compiled from OpenAI’s official “Our structure” page (updated October 28, 2025); CNBC same-day reporting “OpenAI completes restructure”; and Bloomberg, October 29, 2025, explainer.]

Both Bonta and Jennings issued “no-objection” statements. Bonta publicly stated that his office would “continue to closely monitor” OpenAI’s execution of its charitable mission.[Bonta’s statement: see Time, October 28, 2025, “An OpenAI Timeline”; Jennings’ statement: see Delaware DOJ announcement, October 28, 2025.] But critics’ doubts did not dissipate—the Eyes on OpenAI coalition identified several structural weaknesses in the restructuring: the PBC board and the Foundation board were permitted to share members; the Safety Committee’s independence lacked concrete guarantees; and once the Foundation existed only as a minority shareholder in the PBC, its governance influence would depend mainly on the cooperation of majority shareholders (including Microsoft), rather than on any rigid legal control.[Criticism of the restructuring: see CalMatters, October 30, 2025, “OpenAI’s restructuring deal with California is full of holes.”]

In February 2026, researchers observed that the word “safely” had quietly been removed from OpenAI’s official mission statement.[On the removal of “safely” from OpenAI’s mission statement: see Creati.ai reporting, February 23, 2026.] A small revision of this kind may not, in itself, be decisive—but it closely resembles the quiet 2023 revision of the return-cap terms: in an organization whose principal source of legitimacy is its mission statement, key phrases are being subtly and continuously adjusted, almost never with public explanation.

At the same time, OpenAI’s commercial trajectory continued to accelerate: a $40 billion funding round led by SoftBank in April 2025 pushed the valuation to $300 billion; an October 2025 secondary-market transaction raised it to $500 billion.[2025 valuation trajectory data: compiled from Bloomberg, Crunchbase News (October 2, 2025), and Pitchbook reporting.] A reported $11 billion financing round in February 2026 pushed the valuation past $800 billion.[February 2026 funding round data: see Let’s Data Science, March 2026, citing public market information at the time.] The company simultaneously launched a massive data-center expansion, establishing substantial compute-supply contracts with Oracle, Nvidia, AWS, and others.

More symbolic is the lawsuit still in progress. Musk’s suit against OpenAI, Altman, Brockman, and Microsoft—after federal Judge Yvonne Gonzalez Rogers denied OpenAI’s motion to dismiss—is scheduled to begin jury selection on April 27, 2026, in the Oakland federal district court.[Musk v. Altman trial date: see CNBC, April 7, 2026, “Elon Musk seeks ouster of OpenAI CEO Sam Altman.” Case number: 4:24-cv-04722, U.S. District Court, Northern District of California.] The original damages claimed: $134 billion. In early April 2026, Musk amended the suit so that any damages awarded would be donated in full to the OpenAI nonprofit foundation rather than paid to Musk personally.[Musk’s April 2026 amendment redirecting any damages awarded to the OpenAI nonprofit foundation: see Law360’s Musk v. Altman et al. case page and Brownstone Research’s April 2026 analysis.] The amendment undercut OpenAI’s narrative that the suit was “a business competitor’s vengeance”; regardless of the case’s final outcome, the early emails, Brockman’s personal notes (“it was a lie” and similar internal records), and Nadella’s late-night text messages—all disclosed during the litigation—will face public scrutiny at trial.[On the disclosure of internal evidence including Brockman’s “it was a lie” notes: see Techbuzz.ai compilation, January 16, 2026, and Judge Gonzalez Rogers’ 28-page ruling, January 15, 2026.]

A further wrinkle is worth noting: what Musk hopes to achieve through the lawsuit—restoring OpenAI to “a true nonprofit”—is not entirely consistent with the signal his own earlier acquisition offer sent. But that is not a question this essay aims to adjudicate. What bears watching is that, regardless of where the case ultimately leads, the legitimacy of the entire restructuring will be re-examined in full in court, and the conclusions of that examination may carry more enduring weight than the two attorneys general’s October 2025 “no-objection” statements.

V. Four Structural Propositions — Direct Implications for NGO Governance

Taking OpenAI’s decade as a case, the easiest conclusion to draw is that “mission-driven organizations inevitably drift once they scale.” That conclusion is not wrong, but it is too general, and it cannot guide any actual governance design. The following four propositions are sharper and more operational—they are the structural lessons OpenAI’s story offers to NGOs, especially those currently debating charter reform, scale expansion, hybrid structures, and capital partnerships. Each proposition is grounded in concrete cases and attempts to move from abstract argument toward specific institutional recommendations.

Proposition One: The vaguer the beneficiary, the lower the cost of mission drift.

OpenAI’s “fiduciary duty to humanity” is a textbook example. It is rhetorically powerful and institutionally nearly toothless—because “humanity” cannot sue the board. Any board resolution can be argued as “consistent with humanity’s interests” and equally argued as “contrary to humanity’s interests,” and neither argument triggers any enforceable judicial process.

The lesson for NGOs is not merely “specify the beneficiary more concretely.” The deeper meaning is: the scope of beneficiary definition in the mission statement determines the cost of mission breach. The broader the beneficiary (all humanity, the public, vulnerable populations, future generations), the fewer subjects have standing to sue; the fewer subjects with standing, the lower the cost of mission drift.

The American Red Cross’s response to the 2010 Haiti earthquake is the canonical Western case of this problem. A 2015 joint investigation by ProPublica and NPR revealed that despite raising nearly $500 million for Haiti relief, the organization could document only six permanent homes built out of the intended housing construction. The formal accountability mechanisms—state attorney general oversight, IRS Form 990 disclosure, GAAP-compliant audit reports—did not detect this gap; it surfaced only when independent investigative journalism forced the disclosure of internal memos. Congressional hearings followed; internal reforms were announced; but none of the original accountability channels written into the Red Cross’s governance framework had triggered on their own.[On the American Red Cross’s Haiti earthquake response and subsequent investigations: see ProPublica and NPR’s joint investigation, June 3, 2015, “How the Red Cross Raised Half a Billion Dollars for Haiti and Built Six Homes”; the subsequent Senate inquiry led by Senator Charles Grassley, 2015–2016; and the Red Cross’s own 2016 announcements of transparency reforms.] The problem was structurally similar to OpenAI’s: the charter’s “responsibility to the public” clauses existed in form but were not substantively used by any actor before an external crisis forced the issue.

The subsequent reforms at the Red Cross—enhanced public disclosure, dedicated transparency officers, third-party monitoring of major disaster-response funds—were essentially building, outside the charter, an external mechanism through which “mission breach could be discovered and pursued.” The necessity of that post-hoc construction is precisely the evidence that the original charter’s “responsibility to the public” clauses were operating in a void.

The implication for NGO charter design is two-layered. The first layer is at the level of phrasing—avoid expressions like “the public,” “society,” “those in need,” which lack definitional specificity, and define beneficiaries as concretely as possible along geographic, demographic, and identity dimensions, so that accountability subjects can be located. The second layer is at the institutional level—designate at least one “mission guardian” role explicitly in the charter and governance rules, whether an independent supervisory board, a donor-representative assembly, or a dedicated ethics committee. The critical feature is that it must have concrete powers: to examine documents, to question management, to disclose externally—not exist only in name. If the question of “who has standing to pursue mission breach” is not answered at the charter-design stage, the entire mission-lock mechanism is a fiction.

Proposition Two: Mission-driven organizations have a scale ceiling, beyond which fidelity almost inevitably declines.

OpenAI’s eight-year evolution illustrates an intuitive but deliberately ignored regularity: when a mission-driven organization’s scale—whether in headcount, budget size, or external dependency—exceeds a certain critical point, mission fidelity enters an irreversible downward trajectory. This is not because leaders become wicked, but because the “bandwidth” of governance tools has a ceiling. A six-person board meeting quarterly cannot exercise substantive oversight over an organization with thousands of employees, hundreds of ongoing projects, and a multi-billion-dollar budget.

This regularity has clear precedents in international NGO history. In 2018, The Times of London exposed a sexual-exploitation scandal in Oxfam’s response to the 2010 Haiti earthquake—involving, among others, the then-country director for Haiti. The core finding of the subsequent independent inquiry: Oxfam’s scale expansion (operations in nearly 100 countries, staff in the tens of thousands) had far exceeded the actual bandwidth of its headquarters’ compliance, accountability, and cultural-governance functions, producing de facto “compliance blind spots” in regional and country offices facing local power asymmetries.[On the Oxfam Haiti scandal and UK Charity Commission investigation: see Charity Commission, June 11, 2019, “Inquiry report: Oxfam.” The investigation lasted approximately 14 months.] The UK Charity Commission’s 2019 investigative report determined that Oxfam’s governance failure was not an aberration of individual behavior but “systemic.” This finding subsequently drove the reconstruction of accountability frameworks across the UK development sector, including the approach of DFID (now FCDO) to partner vetting.

A second, closely related precedent: Wikimedia Foundation’s internal controversy during 2014–2016. As the Foundation’s annual budget crossed $50 million, a series of internal revolts—over the launch and rollout of the “Knowledge Engine” search project, over leadership changes, over conflicts of interest in board appointments—exposed the gap between governance structures designed for a much smaller nonprofit and the operational reality of a globally distributed platform with hundreds of staff and hundreds of millions of users.[On the Wikimedia Foundation’s 2014–2016 governance controversies: see the Foundation’s own post-Tretikov transition documentation, community discussions on Meta-Wiki, and BuzzFeed News and Wired reporting on the Knowledge Engine controversy.] Executive Director Lila Tretikov resigned in early 2016. The Foundation’s board subsequently overhauled its executive selection process and community-governance interface. The pattern: scale had outpaced governance design, and the correction came through internal revolt rather than through planned adaptation.

This is not to say NGOs should not grow. It is to say that scale expansion must be accompanied by pre-planned expansion of governance tools—including: scaling board size and professional specialization roughly linearly with budget; establishing a dedicated compliance officer reporting directly to the supervisory body (not the executive team); building structured beneficiary-feedback channels; and introducing periodic external third-party governance evaluations. Otherwise scale itself will consume the mission. Before OpenAI grew past the thousand-employee and hundred-billion-dollar-valuation thresholds, its charter was a document every employee read repeatedly; afterward, it gradually became a page in the PR materials. The manager of any mission-driven organization must remain alert to whether the same process is unfolding at home—the “reading frequency” of the charter and the curve of organizational scale often diverge, and that divergence is itself an early indicator worth tracking.

Proposition Three: The employee layer is becoming the invisible governance subject of mission-driven organizations.

The most critical observation from those ninety-six hours in November 2023: what determined the outcome was not the statutory power of six directors, but the collective action of about 738 employees. This is not unique to OpenAI; it is the general logic of knowledge-intensive organizations—when an organization’s core assets are human cognitive capital rather than fixed capital, the collective-exit threat of employees constitutes a substantive constraint on management and board.

For NGOs, this phenomenon has long been underestimated. Over the past decade, many NGOs in mature ecosystems have grown from staffs of fewer than a dozen to staffs of hundreds, with senior researchers, program officers, and senior fundraisers constituting the de facto organizational core. One observable trend: horizontal mobility among senior NGO and social-enterprise professionals has intensified in recent years—a foundation’s senior program officer leaving to become executive director at another foundation, former international-NGO country-office researchers taking over domestic organizations, substantial flow between the nonprofit sector and commercial ESG consulting, and so on. The density of this mobility network means that the core employees of any given organization, if their mission-related judgments diverge from leadership’s on a sustained basis, now have far more exit paths than a decade ago.

This structural fact has two implications for NGO governance. One is risk: if an organization’s “soul” exists only in the tacit consensus of a few core employees—if the charter is written vaguely, if the mission is not sufficiently textualized, if daily operations depend on individual judgment—then whether those employees stay or leave directly determines whether the mission continues. An executive director’s resignation may take three to five core colleagues along; even a capable successor needs one to two years to rebuild mission consensus.

The other is a resource: in environments where charter design is relatively weak and external accountability channels are obstructed, internal employee culture may in fact be the most effective check on mission drift. An organization that builds an internal culture of speaking up promptly about mission deviation, and gives that speaking up structured, safe channels, can partially compensate for the inadequacies of formal governance tools through the stability of employee consensus.

Specific governance designs should consider several directions. First, formally incorporate core employee representatives into the governance structure—for instance, employee-elected directors (non-voting or with limited voting rights), and annual staff meetings with non-binding consultation on strategic direction. This is no longer novel among international NGOs: Oxfam’s 2019 governance reforms introduced exactly such employee-representation mechanisms. Second, establish clear “mission-grievance channels”—allowing employees to raise concerns about perceived mission drift through sanctioned internal paths, rather than resorting to the extremes of collective resignation or media leaks. Third, embed the charter’s key clauses—particularly beneficiary definitions and mission-constraint mechanisms—explicitly in employee handbooks and onboarding materials, so that the charter shifts from “words hanging on the wall” to an actual reference in daily work. In 2023, OpenAI demonstrated this regularity at almost the highest cost imaginable; mission-driven organizations need not pay the same tuition again.

Proposition Four: Hybrid structures are a double-edged sword—design details determine whether they are innovation or fig leaf.

OpenAI’s 2019 innovation and its 2025 PBC transition both belong to a broader question: how mission-driven organizations should coexist with commercial capital. The question is directly relevant to the NGO sector globally—social-enterprise registration pathways are maturing; “public-interest-plus-commerce” partnership models are becoming more frequent; and the replacement funding structures emerging amid shifts in cross-border philanthropic flows are still being worked out. Beyond OpenAI, two contrasting cases merit parallel examination.

The first is Mozilla. In 2003, the Mozilla Foundation was registered as a 501(c)(3) nonprofit in California; in 2005, it formed Mozilla Corporation as a wholly owned subsidiary (100% of the equity held by the Foundation), responsible for Firefox development and commercial partnerships. On the surface this structure resembles OpenAI’s 2019 design—a nonprofit parent controlling a commercial subsidiary. But in several critical details the two are fundamentally different. Mozilla Corporation has no external shareholders, issues no stock options, and pays no dividends; all profits must flow back into Mozilla’s own projects. The Foundation’s full ownership of the subsidiary also means there is no “investor return cap” clause to be repeatedly renegotiated or quietly amended.[Mozilla Foundation and Mozilla Corporation structure: see Mozilla’s official “Organizations” page, Wikipedia entries for “Mozilla Foundation” and “Mozilla Corporation,” and Mozilla Foundation annual audit reports (2021). Its key declarations—no external shareholders, no stock options, no dividends, profits reinvested in Mozilla projects—see Mozilla Corporation’s founding announcement (August 3, 2005).] This design has operated for more than twenty years; despite Mozilla Corporation weathering layoffs, Firefox market-share decline, and other commercial pressures, its structural binding to the Foundation’s mission has never loosened.

What does the difference between the two tell us? Once a commercial subsidiary introduces external investors and establishes tradeable equity or quasi-equity instruments, the governance tension between it and the mission-driven parent irreversibly increases. OpenAI’s PPUs had already created, at the employee level, an incentive vector independent of the mission; Mozilla Corporation has no such instrument, so that tension does not exist. For an NGO considering a commercial subsidiary—whether a social enterprise, a consulting firm, or a technology platform—this is not a technical question but a foundational one: once external equity is introduced, every subsequent governance tool will be retroactively constrained by that initial decision.

The second contrast is JustGiving. Founded in 2001 in the UK, JustGiving became the country’s largest online giving platform, positioning itself in the social-enterprise space for most of its first decade and a half. In 2017, it was sold for £95 million to Blackbaud, a US-listed for-profit software company.[On the JustGiving acquisition by Blackbaud: see Third Sector, October 2017, “Blackbaud acquires JustGiving”; UK Charity Commission statements on the transaction; and subsequent analysis by Civil Society Media, 2017–2018.] The sale provoked substantial controversy in the UK charity sector because JustGiving had accumulated vast donor data and charity-partnership relationships under a charity-adjacent brand; its transfer to for-profit ownership raised questions that the UK Charity Commission and DCMS ultimately examined formally: what happens when the trust built under social-mission framing becomes a commercial asset?

The JustGiving case illuminates, from the opposite direction, the market value of the “charity appearance” in a commercial context. When a company can, through a charity-like interface, reach the giving behavior of several million users and convert that trust into precision-targeted marketing leads for commercial products, the “public-interest attribute” itself becomes an economic asset. That asset can be sold, transferred, and re-monetized—and the donors who generated the underlying trust have no legal standing over how it is used afterward. OpenAI’s charter clause about investors participating “in the spirit of a donation” sounds romantic, but its execution depends entirely on investor self-discipline—an unreliable assumption for any real-world capital partner.

Combining Mozilla’s positive precedent, JustGiving’s cautionary case, and OpenAI’s failure, a minimum institutional defense checklist for NGOs considering hybrid structures can be distilled:

1. If a commercial subsidiary must be established, prioritize wholly owned structures over those introducing external shareholders; if external shareholders must be introduced, explicitly prohibit stock options, profit participation, and performance-based equity instruments from reaching the employee level, preventing employee incentives from decoupling from the mission.

2. Specify at the charter and shareholders’-agreement level an enforceable path for the “mission-priority” clause—not “the board shall consider the mission,” but “if a board resolution is determined to conflict with the mission, that resolution is automatically void,” accompanied by concrete triggering mechanisms and independent review procedures.

3. Commercial subsidiaries and nonprofit parents must maintain substantive separation in personnel, finance, and brand—in particular, core decision-making positions must not be held concurrently across entities.

4. All capital-partnership agreements should include pre-set exit clauses specifying how assets, brand ownership, and historical data will be divided if mission conflict becomes irreconcilable.

5. The above clauses should be transparently disclosed in public organizational documents, not treated as internal side agreements.

None of these is a silver bullet. But each directly corresponds to a specific hole exposed during OpenAI’s decade-long governance experiment. Extracting lessons from others’ tuition is one of the most mundane and most practical functions of NGO research.

Conclusion

OpenAI’s decade, viewed from one angle, is a story of “mission gradually corroded by capital”; viewed from another, it is humanity’s first stress test, at multi-hundred-billion-dollar scale, of the “mission-driven organization” as an institutional form. It used real money, real personnel changes, and real court evidence to expose every joint of this ancient organizational form—the rhetoric versus the enforceability of charters, the statutory power versus the informational basis of boards, employee interest versus mission resonance, the rhythm of capital versus the bandwidth of governance tools—in broad daylight.

For an NGO researcher, OpenAI’s value lies not in providing a template to copy. In fact, both the capped-profit structure and the PBC transition were, at their respective moments, celebrated as institutional innovations, and each revealed profound internal contradictions as events unfolded. What is genuinely worth learning is the detailed “minefield map” this case provides—which structural designs look elegant but shatter under pressure; which charter clauses are grand in writing but lack substantive force in court; which governance tools appear sufficient at small scale but fail once the organization crosses a certain threshold.

If this experiment has, as of 2026, one question still unsettled, it is this: will it ultimately be recorded as “a valuable governance experiment” or as “a high-cost mission betrayal”? The answer may depend on what the still-being-built OpenAI Foundation—holding $130 billion in equity—actually accomplishes over the next fifteen years, and on how far it is willing and able to go when the interests of PBC shareholders conflict with the obligations of mission.

Whatever the answer, the governance literature this experiment has left behind is already rich enough. It deserves to be treated by every manager, director, and researcher of a mission-driven organization with a seriousness beyond that applied to AI industry news. This is the core value of this essay as a piece of NGO-perspective research—we are not studying an AI company; we are studying the fate of the entire organizational form within which we ourselves operate.
