AI model safety vs. economic competition: can both coexist?
Key policy priorities inform the open-source vs. closed-source LLM debate, and there isn't a clear winner.
When legendary venture capital investor Bill Gurley took the stage at the 2023 All-In Summit, he cautioned technologists to be wary of regulatory capture by incumbent players in AI. Gurley argued that the biggest and most well-resourced AI companies are unfairly spreading “a negative open-source message, and I think it’s precisely because they know it’s their biggest threat.” Gurley’s point raises several important questions:
Can AI safety and robust market competition coexist, and if so, how?
Does the AI policy agenda in Washington risk the same regulatory capture seen in the financial services and oil and gas industries?
What might fair, independent-minded, technically savvy federal (and international) AI oversight look like, and how could it avoid stifling competition or innovation?
As AI researchers, developers, investors, and policymakers race to make sense of a paradigm shift in technology that’s Frankenstein-level unpredictable, several common threads are emerging in the ethics, governance, and policy conversations. One worth highlighting: is there a tension between economic competition and model safety in AI?
Developers and startups can build apps on top of foundation models such as GPT-4, Claude 3, Llama 3, and Gemini. It’s a great way to build a vertical- or use-case-specific product that leverages a far vaster dataset and knowledge base than any one startup could assemble on its own.
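In practice, that often amounts to a thin layer of product logic wrapped around an API call. Here is a minimal sketch using the Anthropic Python SDK, assuming `pip install anthropic` and an `ANTHROPIC_API_KEY` set in the environment; the model name and the contract-review use case are illustrative, not drawn from this article.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# A hypothetical vertical app: a contract-review assistant built on a general-purpose model.
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    system="You are a contract-review assistant for a small legal-tech startup.",
    messages=[
        {"role": "user", "content": "Summarize the termination clause in this lease: ..."},
    ],
)

print(message.content[0].text)
```

The startup supplies the domain framing and the user experience; the foundation model supplies the general knowledge, which is exactly why the terms of API access matter so much to the competition question below.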
New academic research makes the case that foundation models are innovation commons — a dynamic fourth space outside firms, markets, and governments, where information and knowledge, rather than equipment or capital goods, are the key resources to be distributed. Think of innovation commons as the precursor to startup markets: hubs of research and development (R&D) that lay the groundwork for new products and services.
Should developer access to foundation models (via API licenses) be viewed as gates — and gatekeepers — to market competition? That’s a question that antitrust and innovation scholars Thibault Schrepel and Jason Potts probe in their May 2024 working paper, “Measuring the Openness of AI Foundation Models: Competition and Policy Implications.”
Schrepel and Potts create a novel framework for comparing the openness of some of the largest foundation models on the market. Ultimately, they find it isn’t as simple as open-source-leaning models being more open across the board than closed-source (proprietary) models, or models from Big Tech companies being less open on the whole than models from smaller companies; it depends on the criteria. But they do find that most foundation models are relatively closed, and they argue that closed-source models pose much greater anticompetitive risks than open-source models.
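To make the “it depends on the criteria” point concrete, here is a toy openness scorecard. The criteria, weights, and scores below are hypothetical illustrations, not the rubric or findings from Schrepel and Potts’s paper; they simply show how a model can look more or less open depending on which dimensions you measure.

```python
# Hypothetical openness criteria, scored 0.0 (fully closed) to 1.0 (fully open).
CRITERIA = [
    "weights_released",
    "training_data_disclosed",
    "permissive_license",
    "api_terms_flexible",
]

# Illustrative (made-up) scores for two generic models.
models = {
    "ModelA (open-weights)": {
        "weights_released": 1.0,
        "training_data_disclosed": 0.4,
        "permissive_license": 0.8,
        "api_terms_flexible": 0.9,
    },
    "ModelB (proprietary)": {
        "weights_released": 0.0,
        "training_data_disclosed": 0.6,
        "permissive_license": 0.0,
        "api_terms_flexible": 0.7,
    },
}

def openness_score(scores: dict) -> float:
    """Unweighted average across criteria; a real framework would weight and justify each one."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

for name, scores in models.items():
    print(f"{name}: overall openness {openness_score(scores):.2f}")
```

Notice that the proprietary model in this sketch scores higher on one criterion (training data disclosure) even while scoring lower overall, which is the kind of criterion-by-criterion nuance the paper’s framework is designed to surface.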
So, what’s the argument in favor of closed-source models?
Perhaps it’s a difference in messaging. For AI safety researchers like the team at Anthropic, it’s an order of prioritization: first figure out how to develop safe and steerable AI systems; then in the deployment phase, sort out questions about building a fair and competitive ecosystem.
Realistically, competition has already begun, including from scores of developers and firms far less concerned with AI harm mitigation than organizations like Anthropic. But in the absence of clear government safety standards, especially in the U.S., where the impact of President Biden’s catch-all executive order is uncertain, foundation model research labs can differentiate themselves with clear, self-imposed policy frameworks.
That’s what Anthropic did with its Responsible Scaling Policy (RSP) — a 2023 document outlining the company’s safety protocols to prevent hazards of increasing magnitude, from interference with democratic institutions and civil society up to large-scale destruction, like “thousands of deaths or hundreds of billions of dollars in damage.”
Given the potential for societal harms and destruction, Anthropic makes a case for prioritizing closed-source model development. One of its flagship LLMs in the wrong hands today likely wouldn’t result in a catastrophic outcome, but several years out, the risk substantially increases. The case for safeguarding the nuts and bolts of one of the most powerful foundation models on the market isn’t chiefly to stymie competition—it’s to build AI responsibly and clear it for public use.
Ultimately, the issue isn’t cut-and-dried. Open-source model developers can expect to answer questions about model safety: how can they ensure user data privacy and safeguard against malicious use? Closed-source model developers can expect to answer questions about model transparency and ecosystem flexibility: can LLM application builders trust that they’ll be able to customize their tools to their liking and retain a fair share of any profits? And can developers and the wider public straightforwardly understand the key biases and limitations in the datasets used to train popular proprietary foundation models, so that they find them trustworthy enough to use and evangelize?