From AI Futures to Human-Centered Systems
The first article in The New Default's speaker series set the stage for a broader conversation about how technology, leadership, and innovation are being reshaped under new pressures: AI acceleration, shifting organizational norms, and rising expectations around responsibility and impact. It explored how the "default settings" we once relied on no longer hold, and why redefining them has become a strategic necessity rather than a philosophical exercise. By introducing the next speakers, we move the conversation forward, from identifying emerging paradigms to examining how they translate into practical, human-scale decisions. The four speakers below represent this shift in focus, spanning long-term AI futures, ethical and human-centered design, and the realities of building and shipping AI systems at global scale, and offering grounded perspectives on what the new default looks like when vision meets execution.
Who Are the Voices Defining The New Default?
The shift toward AI-native organizations is not being led by a single discipline or worldview; it is emerging from the convergence of foresight, design, and execution. Steve Brown approaches AI from the long arc of infrastructure and societal change, asking what happens when intelligence becomes embedded and invisible. Gillian Salerno-Rebic and Maria Burke ground the conversation in human craft, organizational structure, and collaborative intelligence, showing how teams and roles must be redesigned, not erased, as AI enters the workflow. James LePage brings the lens of large-scale engineering, focusing on what it takes to ship AI responsibly inside real products, real teams, and real constraints. Together, the voices shaping The New Default offer a cohesive view of AI not as a tool to adopt but as a system to design, one that reshapes how work actually happens and who is accountable for it.
Meet Steve, Gillian, Maria, and James.
Steve Brown, AI Futurist; former executive at Google DeepMind and Intel
Steve Brown's perspective on AI is shaped by decades spent at the intersection of frontier research and real-world deployment. Having worked across organizations such as Google DeepMind and Intel, Brown has seen how intelligence moves from experimentation to enterprise adoption and, eventually, to something far more consequential: infrastructure. His contribution to The New Default conversation centers on a critical shift many organizations are underestimating: AI is no longer an optional capability at the edge of systems, but a force that is steadily redefining how work, value creation, and decision-making operate at their core.
In his interviews for The New Default, Brown frames this transition as the rise of a hybrid-intelligence workforce, in which humans and machines no longer operate in clearly separated roles but as interdependent systems that continuously shape one another. As AI systems take on more cognitive labor (analysis, synthesis, and prediction), the nature of work itself changes. The challenge, Brown argues, is not technical feasibility but organizational readiness: most companies are still structured for a world where intelligence is scarce, slow, and human-bound, rather than abundant, fast, and increasingly automated.

"I think what we're talking about with AI is just another layer of abstraction. Instead of building a database, right, you'll be building a digital employee, um, an agent that has certain characteristics and basically a job spec".
Steve Brown's thinking points leaders, product teams, and engineers toward a critical reorientation: how to approach AI deliberately, before it settles into the invisible infrastructure of everyday operations.
System-level architecture thinking: Brown's work suggests that teams must look beyond individual models or services and design AI as part of a broader socio-technical system. This means treating data flows, decision rights, escalation paths, and human override mechanisms as architectural concerns rather than afterthoughts. When intelligence becomes embedded everywhere, the surrounding system determines whether outcomes are resilient or brittle.
Organizational readiness over raw capability: A recurring theme in Brown’s writing is the widening gap between what AI can do and what organizations are prepared to absorb. Teams should invest as much effort in governance models, operating principles, and cultural alignment as they do in model performance. Without this, AI amplifies existing dysfunction rather than fixing it.
Workflow redesign for hybrid intelligence: As humans and machines increasingly collaborate, workflows must be rethought around shared cognition. Brown emphasizes designing clear handoffs between human judgment and machine inference, defining where automation ends and accountability begins (a minimal sketch of one such handoff follows this list). This affects everything from product decisions to internal operations and customer-facing experiences.
Leadership and decision-making evolution: Brown's futurist lens highlights that AI adoption is ultimately a leadership challenge. Leaders must move from reactive adoption—responding to tools and trends—to intentional design, where values, incentives, and long-term consequences are considered upfront. This requires comfort with uncertainty and the ability to make irreversible choices consciously.
Strategic clarity in an intent-driven world: As systems shift from rigid instructions to being guided by human intent, Brown warns that ambiguity becomes dangerous. Organizations need sharper articulation of purpose, goals, and ethical boundaries, because AI systems will execute intent at scale. Vague strategy is no longer neutral—it is actively risky.
Competitive advantage through foresight: For software companies and consultancies, applying Brown’s framework enables a different kind of value: helping clients anticipate second- and third-order effects of AI, not just implement features. This positions teams as long-term partners in system design, rather than short-term implementers of rapidly commoditizing tools.
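
To make the handoff idea concrete, here is a minimal sketch, assuming a single confidence score and a reversibility flag; the threshold and names are illustrative, not drawn from Brown's work:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative; tune per decision's blast radius

def route_decision(prediction: str, confidence: float, reversible: bool) -> str:
    """Route a machine inference: ship it, or hand off to a human.

    Automation ends where confidence is low or the action cannot be undone;
    accountability begins at the escalation path.
    """
    if confidence >= CONFIDENCE_THRESHOLD and reversible:
        return f"auto-applied: {prediction}"
    return f"escalated for human review: {prediction} (confidence={confidence:.2f})"

print(route_decision("approve refund", 0.97, reversible=True))
print(route_decision("close account", 0.97, reversible=False))
print(route_decision("approve refund", 0.55, reversible=True))
```

The design choice worth noticing is that irreversible actions escalate even at high confidence, which is one way to encode "where automation ends and accountability begins."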
In short, Steve Brown's message reframes AI adoption as a matter of intentional system design amid accelerating intelligence. Teams that internalize this mindset are better equipped to shape the defaults they will later be forced to live with, rather than inheriting them once change is no longer optional. What places Steve Brown firmly within The New Default narrative is his ability to challenge techno-optimism without rejecting progress. He does not argue that AI should slow down; he argues that leadership thinking must speed up.
Gillian Salerno-Rebic & Maria Burke, Co-Founders of North + Form
Gillian Salerno-Rebic and Maria Burke are seasoned designers and business leaders who lead North + Form, a consultancy focused on UX research, product design, and AI-driven strategy that blends big-tech thinking with practical design sensibilities. Their careers and leadership in UX and strategy have given them a distinctive vantage point on how human experience and organizational design must evolve alongside AI adoption to preserve quality, meaning, and competitive edge.
Drawing on their work with product and design teams, Gillian and Maria bring a thoughtful and human-centered perspective to the question of how teams adapt to intelligence that augments rather than replaces human work. Their insights address not just the tools of AI, but the craft, structure, and roles through which humans and machines collaborate effectively.

"Look at your department right now. Look at your job. List out all of your tasks... What can AI automate already, and what will AI automate? Then you're gonna take all the things that are left, regroup them, and shuffle them. Those are gonna be the new job roles of the future. I'm more afraid that if I am not at the forefront of adopting this stuff, someone else who knows how to use AI better is going to say, that's the person who's gonna take my job. People who are still afraid, hesitant, or wary of ChatGPT, I'm more wary of you than I think I am of the AI, because of where we are right now and the opportunity we have to have deeper discussions".
Across several pieces for The New Default, Gillian and Maria present a cohesive view: AI should amplify human expertise, not obscure it, thoughtfully integrated into organizational structures and treated as a collaborative force rather than a shortcut. They argue that deep, analog experience, such as hands-on design practice, UX research, and human-centered thinking, develops the judgment needed to use AI well, helping teams decide what to automate and where human insight remains essential.
From there, they extend the conversation beyond skills to structure, reframing AI not as a threat to jobs but as a catalyst for redistributing work: automating repetitive tasks to free people for higher-value, strategic, and relational contributions. Ultimately, they push teams to go one step further by treating AI as a "third co-founder": an embedded team member and part of organizational infrastructure, capable of accelerating outcomes while still operating under clear human intent, oversight, and responsibility.
From their perspective, successful AI adoption requires several practical shifts:
Preserve and elevate human expertise: AI should not shortcut mastery—teams benefit most when they’ve already internalized the fundamentals and use AI to extend, not replace, that skill.
Reallocate work thoughtfully: Rather than viewing AI as a redundancy engine, organizations should analyze tasks to distinguish what can safely be automated and what should be retained as strategic, creative, or relational human work.
Treat AI as a collaborator, not a tool: Embedding AI into core workflows—ideation, research, analysis, synthesis—means designing organizational roles and decision rules that incorporate both human and machine strengths.
Balance efficiency with quality and context: Using AI to accelerate delivery is valuable, but not at the expense of contextual insight, empathy, or design judgment—elements that remain uniquely human.
Cultivate AI fluency across teams: Continuous learning, experimentation, and openness to AI as a colleague help reduce resistance and unlock innovation across disciplines.
At the core of Gillian and Maria's message is an insistence on human judgment, craft, and relational intelligence as irreplaceable anchors in the era of AI. Their work reframes AI adoption from a race to automate to a redistribution of cognitive labor, in which teams intentionally design roles, processes, and skills that harness both human and machine strengths.
James LePage, Director of Engineering, AI at Automattic
James LePage leads AI engineering at Automattic, the company behind WordPress.com, WooCommerce, Tumblr, and a broad ecosystem of products used by hundreds of millions of people worldwide. Operating inside one of the most distributed organizations in tech, LePage sits at the intersection of large-scale software systems, autonomous teams, and applied AI in production environments. His work is grounded not in experimentation for its own sake, but in the realities of shipping, maintaining, and governing AI across diverse products and globally distributed teams.
Drawing on Automattic's long-standing culture of autonomy and async collaboration, LePage brings a distinctly pragmatic perspective to AI adoption. Rather than treating AI as a centralized capability owned by a single team, his thinking reflects the challenges of enabling intelligence across an organization where speed, trust, and local decision-making are core operating principles. His insights focus less on what AI can do in theory and more on how it actually works when embedded into real products, workflows, and teams at scale.

"We win in applied AI. That's where we win because we have the distribution, we have the users, we have the data... We don't win by training our own foundation models. An example I like is that Databricks trained a model that was the leading model on benchmarks for three days, and then they never did that again because they won on something else. They don't win on the foundational stuff, unlike building an entirely new WordPress that doesn't have the distribution of 43% of the internet. Our approach is let's become the best possible applied AI company in the world for the open web and use what we already have and infuse it".
From LePage’s perspective, successful AI adoption depends on a set of practical, execution-level shifts that reshape how teams build, decide, and collaborate:
Distributed intelligence over central control: In The Autonomy Advantage, LePage argues that AI delivers the most value when intelligence is pushed closer to where decisions are made, rather than centralized behind rigid approval layers. Empowering teams with AI-augmented autonomy enables faster iteration, better local judgment, and systems that scale without becoming bottlenecks.
Development becomes continuous and synchronous: LePage describes how AI collapses traditional boundaries between planning, building, and execution. With AI assistance, development increasingly happens “in the meeting,” as ideas are explored, validated, and prototyped in real time. This shift forces teams to rethink feedback loops, collaboration norms, and what it means for work to be “done.”
Applied AI over disruptive reinvention: Rather than advocating for sweeping rewrites or wholesale disruption, LePage emphasizes incremental, applied AI. He outlines a playbook for embedding AI into existing systems in ways that compound value over time, prioritizing reliability, observability, and trust over novelty.
Engineering for trust, not just capability: At Automattic's scale, AI systems must be understandable, debuggable, and safe to operate. LePage's approach highlights monitoring, guardrails, and human-in-the-loop design as first-class engineering concerns, ensuring AI augments teams without undermining accountability; a minimal sketch of this pattern follows the list.
Team fluency as an engineering responsibility: AI is not just a platform decision but a skills challenge. LePage emphasizes enabling engineers and product teams to understand how AI behaves in practice—its strengths, limitations, and failure modes—so that better design decisions are made closer to the code.
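
As an illustration of those guardrails, here is a minimal sketch in Python. The call_model function is a hypothetical stand-in, not Automattic's API; the point is that timing, logging, and a safe fallback wrap every model call:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return "summary: " + prompt[:40]

def guarded_call(prompt: str, fallback: str = "AI summary unavailable") -> str:
    """Wrap every model call with timing, logging, and a safe fallback.

    Observability and graceful degradation are part of the feature,
    not an afterthought bolted on later.
    """
    start = time.monotonic()
    try:
        result = call_model(prompt)
        log.info("model call ok in %.3fs", time.monotonic() - start)
        return result
    except Exception:
        log.exception("model call failed; serving fallback")
        return fallback

print(guarded_call("Summarize the release notes for the latest update."))
```

In practice, the same wrapper is where rate limits, content redaction, and human-review routing would attach, which is what makes it an engineering concern rather than an add-on.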
At the core of James LePage’s message is a deeply operational truth: AI creates leverage only when it aligns with how organizations already work. His perspective reframes AI adoption from a disruptive event into a continuous engineering discipline, one that favors distributed intelligence, tight feedback loops, and pragmatic integration over centralized control and theoretical elegance.
The Bigger Picture: Four Perspectives, One Shift
Across all four perspectives, a clear pattern emerges: the future of AI is shaped less by speed and more by intentionality. Steve Brown, Gillian Salerno-Rebic, Maria Burke, and James LePage each argue, through different lenses, that accelerating adoption without rethinking underlying systems only hardens the wrong defaults.
A shared emphasis runs through their work: human judgment must evolve alongside machine intelligence, not be sidelined by it. Whether framed as foresight, craft, or operational accountability, all four position AI as something that amplifies outcomes, good or bad, depending on the clarity of the human intent guiding it.
Equally important is their systems-level thinking. AI is not a feature to add, but an infrastructure that reshapes workflows, roles, and decision-making. Futurism, design, and engineering converge here into a new professional baseline, one where responsibility for AI's impact is shared across disciplines.
For founders, product leaders, and engineers, the implication is direct: competitive advantage will come not from adopting AI first, but from deliberately adopting it, designing systems that remain human-led, resilient, and adaptable as intelligence becomes embedded everywhere.
The New Default Is Still Being Written
The new default is not a finished state or a destination we arrive at; it is something being actively written in real time. The speakers highlighted here are not predicting the future from a distance; they are shaping it through deliberate choices about systems, teams, and technology. Their work reminds us that defaults are never neutral; they are designed, inherited, or left unexamined.
For founders, product leaders, and engineers, the invitation is clear: question the assumptions you're operating under, approach technology with foresight and care, and take responsibility for the systems you help put into the world. The future will be shaped either by conscious design or by convenience, and the new default will reflect whichever choice we make.