This Week’s Focus: Apple’s AI Struggles and Conway’s Law
Apple’s challenges in AI—from Siri’s stagnation to underpowered models—reflect deeper organizational issues. Apple’s secretive, hardware-centric culture has hindered progress in a field that thrives on openness, collaboration, and scale. Viewed through the lens of Conway’s Law, Apple’s AI outcomes mirror its internal structure. To stay competitive, it may need to rethink its famously tight-knit model—embracing transparency, cross-team collaboration, and a more open stance toward the broader AI community.
Apple has long been celebrated for its seamless integration of hardware and software, but in the realm of artificial intelligence the company now finds itself struggling to keep pace. In recent months, Apple’s AI efforts—particularly its Siri voice assistant and new “Apple Intelligence” features—have faced delays, talent departures, and internal tensions. Many observers see a deeper pattern at work.
The Information published an interesting article about the organizational issues that plague Apple. In fact, Apple’s AI woes may be another example of Conway’s Law in action, in which both design and failure mirror the tech giant’s internal organizational structure and culture.
Today we look into Apple’s AI challenges through that lens, examining how the company’s famously secretive, hardware-centric, and product-focused organization has influenced (and constrained) its AI trajectory.
Let’s dive deeper.
The Struggle Behind Apple’s AI Efforts
Apple was among the first tech giants to bring AI to mainstream audiences with Siri in 2011, but more than a decade later the company found itself lagging behind in generative AI, particularly following OpenAI’s launch of ChatGPT in late 2022. Initially slow to respond, Apple eventually recognized the urgency to compete and formed a specialized Foundation Models team under AI executive John Giannandrea, led by Ruoming Pang, a notable AI researcher. By mid-2024, Pang’s group had successfully developed new generative AI capabilities, showcased as “Apple Intelligence,” including enhanced text and image tools and plans for an advanced conversational Siri.
Despite visible technical progress, internal strategic disagreements emerged. According to the article above, the AI researchers felt the company lacked a cohesive vision and was unclear about its priorities beyond merely matching competitors. Tensions peaked in early 2025 when, after a successful demonstration of an advanced conversational AI for Siri, Apple abruptly delayed its launch until 2026 without consulting the AI team. A subsequent reorganization separated the Siri product team, now under software leader Craig Federighi, from Pang’s AI research group.
This fragmentation highlights Apple’s broader challenges: a gap between ambitious research efforts and cautious product-driven management, resulting in delayed innovation and lowered morale.
Conway’s Law and Apple’s Organizational Structure: Product vs. Research
To understand why Apple’s AI efforts stumbled, it’s useful to recall Conway’s Law: “Organizations that design systems are constrained to produce designs which are copies of the communication structures of those organizations.” In other words, a company’s internal structure and culture will inevitably manifest in the systems and products it creates.
It’s important to recognize that Conway’s Law specifically describes how an organization’s internal communication structures influence the design and architecture of the systems it produces. The original formulation is narrowly focused on that alignment between communication and system design; in Apple’s case we apply the concept more broadly, using it metaphorically as a lens to understand how Apple’s deeply ingrained organizational culture and internal dynamics might have shaped, influenced, or constrained its broader strategic approach to AI. This interpretation stretches beyond Conway’s original focus on system architecture, but I believe the analogy remains insightful in revealing Apple’s current challenges.
Apple’s approach to AI offers a textbook example of this principle. Over decades, Apple’s organization has been optimized for tight integration, secrecy, and a focus on hardware-software synergy aimed at polished consumer products. These traits have deeply influenced how Apple pursued (and in some ways limited) its AI development.
One of the clearest manifestations of Conway’s Law in Apple’s AI journey is the split between its AI research group and its product development group. When Siri’s AI overhaul was delayed and reassigned in early 2025, it exposed a rift. Federighi’s software engineering division took charge of Siri with a mandate to deliver practical features, even if that meant using third-party AI tech, whereas Giannandrea’s team (including Pang’s researchers) continued building Apple’s own models in isolation.
This structural separation mirrored differences in philosophy:
Research Ambitions: Pang’s foundation models team was composed of “rock-star” AI scientists, many from Google Brain/DeepMind, who thrived on tackling big problems and publishing breakthroughs. For them, being at the cutting edge, and being recognized for it, was a key motivator.
Product Focus: Federighi’s software engineering group is famously product-driven. Their north star is enhancing Apple’s consumer experiences in the near term, with an emphasis on polish and privacy. As the article above describes, this group was skeptical of grandiose AI projects that might not translate immediately into user features. Indeed, Mark Gurman at Bloomberg noted that Federighi had been hesitant to green-light large-scale AI investments, and as late as 2023 he remained “deeply skeptical of AI.” From this viewpoint, large language models were interesting only insofar as they could help Siri or other apps do useful things like better dictation, smarter autocorrect, or richer answers—and do so in a way that met Apple’s high quality bar.
The internal shake-up of 2025 essentially put Federighi’s philosophy in the driver’s seat for Siri. After the reorganization, rumors emerged that Apple was considering a radical shift for Siri: using external AI models instead of waiting for an in-house solution. In June 2025, Bloomberg reported that Apple had held talks with OpenAI, Anthropic, and Google, asking these companies to train custom versions of their LLMs on Apple’s own cloud infrastructure for Siri. In other words, Apple was on the verge of sidelining its homegrown models in favor of proven outsiders—a drastic move to “turn around its flailing AI effort.”
This highlights how fragmentation in Apple’s org chart led to a fragmented AI strategy. The communication structures were such that the research arm and the product arm were not fully in sync on goals, timelines, or methods. True to Conway’s Law, the Siri that users got in 2024-2025 was an underwhelming assistant with a patchwork of minor AI features—arguably a reflection of Apple’s internally divided approach to developing it. As one former Apple AI team member observed, attracting and retaining top talent in AI requires a “sense of mission and north star” to unify efforts. For a time, Apple’s foundation models group had its own mission, which clearly wasn’t the company’s mission. The lack of a single, coherent vision bridging research and product ultimately slowed Apple’s progress.
One might wonder, given Apple’s long record of successful innovation within the same secretive, functional, and hardware-focused structure—including groundbreaking successes like the iPhone, Face ID, and the M1 chip—why its AI effort specifically has struggled. The critical difference may lie in the nature of AI itself. AI breakthroughs often rely on rapid experimentation, iterative model training, open community benchmarks, and substantial external collaboration. Unlike hardware design, where tight internal integration and secrecy provide clear competitive advantages, AI thrives on openness, rapid iterative cycles, and vast computing resources—conditions that Apple’s organization historically hasn’t optimized for.
In theory, Apple’s functional organizational structure should promote a cohesive, unified strategy, as all functions (engineering, design, marketing) align centrally rather than working independently in separate divisions. Indeed, Apple famously adopted a functional structure precisely to maintain tight integration and consistency across its products, ensuring a clear strategic vision driven from the top.
However, the fragmentation in Apple’s AI effort arose because functional structures can unintentionally create silos around specialized areas, especially when those areas differ significantly in goals or incentives. At Apple, the AI research team under Giannandrea focused primarily on ambitious, long-term AI breakthroughs, while the product-focused software team under Federighi prioritized immediate consumer-facing enhancements and seamless user experiences.
This divergence in priorities and incentives created a disconnect, and instead of integrating naturally, each function began to develop its own internal objectives, standards, and success metrics. Without strong alignment at the executive level, communication gaps emerged, leading to delays and confusion around Apple’s broader AI strategy.
In other words, Apple’s fragmentation stemmed less from the functional model itself than from insufficient strategic alignment at the leadership level, which such a structure depends on. Functional structures can and should foster cohesion, but when leadership doesn’t clarify and unify competing objectives, the result can, ironically, be fragmentation.
Closed-Door Policy: Secrecy and the Open-Source Debate
Another organizational hallmark that affected Apple’s AI trajectory is its aversion to openness. This cultural clash became evident in a key incident: Pang’s team wanted to open-source some of Apple’s AI models in early 2025, but the idea was shot down by Apple’s top brass.
Pang and his researchers believed that open-sourcing certain models would serve two purposes: it would showcase Apple’s technical progress to the wider AI community, and it would invite external developers and scientists to help improve those models. Given how rapidly AI was advancing through shared efforts, they saw this as a way to keep Apple relevant. However, Craig Federighi strongly opposed releasing Apple’s models publicly.
According to sources, Federighi argued there were already plenty of open-source models out in the world, so Apple didn’t need to release its own to spur research. More pointedly, he was concerned that if Apple revealed its model’s details, it would become apparent just how much they had to shrink and compromise the model’s performance to make it run on iPhones—and that would make Apple look bad. In essence, Federighi preferred that Apple endure public criticism for being “behind in AI” rather than open up and confirm any weaknesses by showing an underperforming model.
For researchers, publishing and open-sourcing are lifeblood; those activities allow peer recognition and feedback, and help recruit talent who want to work on visible, impactful projects. At Apple, Pang’s team found themselves “trapped by silence,” their promising results kept in the shadows until they could be productized (which in some cases never happened). In fact, by company policy, no AI research at Apple can be shared publicly unless it’s part of a shipped product—and even then, what ships is often a watered-down version of the full research due to practical device constraints. This meant years of work by Pang’s group went unseen outside Apple. In July 2025, shortly after resigning, Pang lamented this in a bittersweet farewell on LinkedIn, highlighting a new research paper by his team about efficiently shrinking models to run on iPhones—one of the rare instances their work saw daylight.
Apple’s extreme closed-door approach stands in stark contrast to competitors: Meta (Facebook) open-sourced its LLaMA language model in early 2023, which, despite some risks, generated goodwill and outside innovation that Meta could tap into. Many AI experts believe that openness accelerates progress, as evidenced by open research communities and cross-company collaborations in AI.
In Conway’s Law terms, Apple’s ingrained communication structure led to an AI approach that cut itself off from external feedback and contribution, arguably limiting its progress compared to a more open ecosystem.
On-Device Mandate: Hardware Constraints Shape AI Design
Perhaps the most defining feature of Apple’s philosophy is its end-to-end control of hardware and emphasis on on-device processing. Apple has consistently touted privacy and efficiency benefits from running AI features locally on iPhones, iPads, and Macs rather than relying heavily on cloud servers. This philosophy, born from the company’s values and business model, significantly shaped the technical design of Apple’s AI models—and introduced severe constraints. In fact, it’s a prime example of Conway’s Law: Apple’s AI systems were designed to fit Apple’s hardware-centric model, and thus ended up mirroring the limitations of that model.
Within Apple, it was essentially non-negotiable that any AI models integral to system features must run on Apple’s own devices or at least on Apple-controlled servers (with stringent privacy measures). This was reaffirmed at WWDC 2025 when Craig Federighi highlighted “an extraordinary step forward for privacy… with private cloud compute, which extends the privacy of your iPhone into the cloud so no one else can access your data.” In practice, that means even when Apple uses cloud AI processing, it does so in a walled-off way that keeps user data opaque (even to Apple), and often Apple prefers purely on-device AI if possible. The Apple Neural Engine chips in iPhones and Apple Silicon Macs are specialized for AI tasks, and Apple has optimized many AI features (like face recognition, Siri’s speech processing, autocorrect, etc.) to run locally. For straightforward tasks, this works well and aligns with Apple’s USP of privacy. But for large-scale generative AI models, it posed a huge challenge.
By 2024, state-of-the-art language models had tens or hundreds of billions of parameters and typically ran on racks of NVIDIA GPUs in data centers. Apple’s equivalent was to try to compress and distill models enough to run on the Neural Engine or its in-house chips. According to insiders, the foundation models team did achieve “promising results in model development,” including training some models with “hundreds of billions of parameters” by early 2023. Yet when it came time to deploy these models, Apple’s policy that they must run on Apple devices (or at least Apple’s own cloud, which relies on Apple’s custom chips) meant downgrading their size and capability. Apple’s internal hardware simply could not match the raw AI horsepower of an NVIDIA H100 GPU, for example. In fact, Federighi himself privately acknowledged in an email that Apple was “forcing huge compromises to run models on its hardware.” The reports revealed that Apple’s current custom AI chips are far behind the cutting edge, and even the next-generation Apple AI chips will only roughly match the performance of today’s NVIDIA H100. That effectively puts Apple two steps behind in the compute race.
As a result, Apple’s AI services, like the delayed Siri upgrade, relied on scaled-down models. While Apple hasn’t disclosed specifics, Siri’s planned LLM was likely much smaller than models like GPT-4, constrained by on-device memory or, at best, by Apple’s less-than-ideal server chips. This trade-off may partly explain why Apple internally delayed Siri’s rollout: the quality may simply not have met Apple’s high bar once squeezed into a privacy-preserving, on-device form. It certainly explains Federighi’s fear that open-sourcing Apple’s models would expose their performance gap. Apple’s insistence on a closed ecosystem meant its AI engineers were always fighting with one hand tied behind their back: they had to make models smaller and leaner than what the open AI world, running on power-hungry GPUs in the cloud, was freely exploring.
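To make the scale of that compression concrete, here is a rough back-of-envelope sketch in Python. The parameter counts, bytes-per-weight figures, and the roughly 8 GB device RAM budget are illustrative public rules of thumb, not Apple’s disclosed numbers; the point is only to show why a frontier-scale model cannot simply be squeezed onto a phone while a heavily shrunken one can.

```python
# Back-of-envelope estimate of LLM memory footprints at different
# parameter counts and quantization levels. Illustrative only: the
# figures below are rough public rules of thumb, not Apple's numbers.

BYTES_PER_PARAM = {
    "fp16": 2.0,   # 16-bit weights, typical for cloud deployments
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization, common for on-device models
}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate memory needed just to hold the model weights, in GB."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# Assumed device budget: recent iPhones ship with roughly 8 GB of RAM,
# only part of which is actually available to a single model.
DEVICE_RAM_GB = 8

for name, params in [("~3B on-device model", 3e9),
                     ("~70B open-weights model", 70e9),
                     ("~200B frontier-scale model", 200e9)]:
    for precision in ("fp16", "int4"):
        gb = weight_memory_gb(params, precision)
        fits = "fits" if gb < DEVICE_RAM_GB else "does not fit"
        print(f"{name} @ {precision}: ~{gb:,.1f} GB of weights "
              f"({fits} in {DEVICE_RAM_GB} GB RAM)")
```

Even at aggressive 4-bit quantization, a model in the hundreds of billions of parameters needs on the order of a hundred gigabytes just for its weights, while a phone-resident model has to live within single-digit gigabytes. That gap is the kind of compromise Federighi’s email alluded to.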
This technological constraint imposed by Apple’s hardware-first and privacy-centric approach is a direct manifestation of Conway’s Law. Because Apple’s organizational values prioritize tightly controlled hardware and privacy-focused systems, its AI teams naturally developed smaller, constrained AI models to fit within these internal boundaries, thus directly shaping—and limiting—the types of AI models and products it has been able to produce.
Consequences: Apple’s AI Future and Dependence on Others
Apple faces a critical juncture: either rapidly realign its AI strategy or risk permanent reliance on rival technology—a stark contrast for a company famous for controlling every layer of its products. Already, Apple has integrated external models like ChatGPT into its ecosystem, signaling a strategic vulnerability where its devices could become mere interfaces for other companies’ AI.
The company’s approach has also led to delayed AI features, potentially leaving it trailing behind competitors like Google and Amazon. Internally, Apple’s AI efforts face turbulence after losing key talent such as Ruoming Pang. His successor, Zhifeng Chen, must now rally a reduced and unsettled team under unchanged leadership, risking further talent departures that could weaken Apple’s AI capabilities over time.
Yet Apple still holds powerful cards, including massive resources, significant R&D spending, and an unmatched user base. Recently, the company hinted at doubling down internally while selectively integrating external AI models. If Apple can reconcile its product-driven timelines with greater research freedom and more robust computing infrastructure—perhaps even through acquisitions—it may yet overcome these challenges. But Apple’s ability to execute decisively in the next one to two years will determine whether it regains its AI leadership or remains trapped in its current cautious trajectory.
Conclusion: Breaking the Conway’s Law Curse
Apple’s struggles with AI serve as a compelling example of Conway’s Law: the company’s AI outcomes have indeed reflected its internal structure and culture. A secretive, hardware-centric, product-driven organization produced AI systems that were constrained, siloed, and late. Siri, once a trailblazer, became stagnant in part because Apple’s internal teams weren’t aligned and empowered to push it forward. The foundation models Apple developed were underpowered relative to peers, largely because they had to conform to Apple’s device-centric philosophy. And Apple’s hesitance to engage openly with the AI community left it isolated during a period of frenetic collective advancement in the field.
Yet, Conway’s Law is not a death sentence. It’s a reminder that to build great systems, sometimes you must restructure the organization or culture behind them. Just as the Australian museum in the original Conway’s Law case study broke down silos to improve its user experience, Apple might need to break or bend some of its norms to succeed in AI. This could mean embracing more transparency, fostering tighter collaboration between research and product groups, investing in infrastructure to support big models, and setting a bold unified vision that inspires its talent. In effect, Apple may have to tweak its famed formula—adding a pinch more Google/Meta-style openness and a dash more Microsoft-style partnership—to stay competitive in the AI era.
Apple has reinvented itself before: in the late 1990s, under Steve Jobs, it underwent a radical organizational overhaul that eventually produced the iMac, iPod, and iPhone. Today’s challenge is different, centered on software intelligence rather than physical industrial design, but the principle is the same. Apple will need to align its teams, communication, and incentives so that the AI it creates can truly reflect the best of Apple, not its limitations. The coming years will reveal whether Apple can break free of the Conway’s Law trap and show that it can still think different, or whether Siri and its successors will remain, as one observer quipped, “a vessel for other companies’ AI.”
For a company that fiercely guards its ecosystem, there could be no sharper wake-up call.
Thank you Prof. This is a really timely article that shows even the most well-resourced companies on earth can fail to innovate if they think, organize, and act like the company of yesterday. And often, it is harder for companies that are sitting on successes today (and yesterday) to make tough changes. Will share this with my organization.