The New Default. Your hub for building smart, fast, and sustainable AI software
From Insight to Execution: Designing the New Default
Building on the foundation set by the first two articles, this third installment in The New Default speaker series shifts the focus from framing the challenge to examining what it takes to operate inside it. If the earlier pieces explored why old defaults no longer hold and how leadership, ethics, and long-term thinking must evolve, this article zooms in on the mechanics of execution: how AI is reshaping systems, roles, and decision-making at the ground level.
Based on the insights of Krzysztof Zabłocki, Anna Barnacka, and Zbigniew Sobiecki, a common theme emerges: AI amplifies outcomes, not intent, and without deliberate structure, context, and human judgment, speed quickly becomes fragile. What follows is not a celebration of AI capability, but a practical exploration of how leaders and builders should rethink control, expertise, and system design as intelligence becomes embedded everywhere.
Key takeaways:
AI success depends on system design, not model power. Context, control, and architecture determine outcomes more than prompts or raw capability.
Domain knowledge and structured context are essential. AI performs reliably only when grounded in real-world constraints and expert understanding.
Speed without structure leads to fragility. Rapid AI-driven development must be balanced with determinism, validation, and clarity.
Human judgment becomes more critical. People define intent, boundaries, and risk while AI executes at scale.
Advantage comes from intentional adoption. Teams that design resilient, human-led AI systems will outperform those that simply adopt AI fastest.
Who Are the Voices Shaping The New Default?
As AI fundamentally changes how software is built, expertise is applied, and teams operate, the most valuable insights are emerging from practitioners working at the sharp edge of real systems. The following article brings together three such voices: Krzysztof Zabłocki, Anna Barnacka, and Zbigniew Sobiecki. Coming from software engineering, deep science, health tech, and AI-driven product development, they approach AI from different angles yet converge on a shared realization: artificial intelligence alone is not enough. Context, control, domain knowledge, and human judgment increasingly determine whether AI creates durable progress or accelerates fragility. The New Default perspectives collectively map how roles, workflows, and systems must evolve as AI becomes embedded into the foundations of modern work.
Krzysztof Zabłocki, Principal Swift Engineer, Creator of Sourcery
Krzysztof Zabłocki is a Principal Swift Engineer with over a decade of experience. He led the development of apps for The New York Times and Headspace. He is the author of Sourcery, a Swift metaprogramming tool used by more than 40,000 apps, including Airbnb, Bumble, and The New York Times. Combined, his open-source libraries and tools are used by more than 80,000 apps. He has delivered talks at over 50 conferences and meetups worldwide, with his iOS architecture talk among the most viewed in the community. As co-founder of Pixle, he was the sole engineer on 5 applications—all featured by Apple. His consulting work has saved clients millions of dollars through engineering process improvements.
In The New Default, Krzysztof shares his evolved approach to AI-augmented development: how he orchestrates multiple AI models, structures context for consistent results, and maintains quality through systematic automation. Drawing on years of experience with large, real-world codebases, Krzysztof Zabłocki argues that the key constraint in AI-driven software development has shifted away from clever prompts toward how well AI is situated within systems through context and control.
He frames large language models not as predictable tools but as probabilistic collaborators whose effectiveness depends on deliberate context engineering: structuring inputs, constraints, and feedback so AI can function safely and consistently inside production environments. From this view, coding is not disappearing but evolving: engineers increasingly create the rules, safeguards, and intent frameworks around AI-generated output, with long-term advantage accruing to teams that design robust human–AI systems rather than simply adopting AI the fastest.

It used to be prompt engineering, and now it's like the context window engineering. I think that's one of the biggest things.
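The shift Zabłocki describes can be made concrete with a rough sketch. All names, fields, and the priority ordering below are illustrative assumptions, not his actual tooling; the point is that context engineering means assembling what the AI needs (intent, constraints, failing tests, code) into a structured, budget-aware window rather than a free-form prompt:

```python
from dataclasses import dataclass, field

@dataclass
class ContextWindow:
    """Structured context for an AI coding task (names are illustrative)."""
    intent: str = ""
    architecture_rules: list = field(default_factory=list)
    failing_tests: list = field(default_factory=list)
    relevant_files: dict = field(default_factory=dict)

    def render(self, budget_chars: int = 8000) -> str:
        # Highest-priority context first (intent, then constraints), bulk
        # material last, truncated to the budget so critical parts survive.
        sections = [
            "## Intent\n" + self.intent,
            "## Architectural constraints\n" + "\n".join(self.architecture_rules),
            "## Failing tests\n" + "\n".join(self.failing_tests),
            "## Relevant files\n" + "\n".join(
                f"# {path}\n{body}" for path, body in self.relevant_files.items()
            ),
        ]
        return "\n\n".join(sections)[:budget_chars]

ctx = ContextWindow(
    intent="Fix the cache eviction bug without changing the public API.",
    architecture_rules=["No new third-party dependencies."],
    failing_tests=["test_eviction_respects_max_entries"],
    relevant_files={"cache.py": "class Cache: ..."},
)
prompt = ctx.render(budget_chars=500)
```

The design choice worth noting is the explicit priority order: when the budget is exceeded, it is the bulk material that gets cut, never the intent or the constraints.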
Across the interviews, Zabłocki presents a unified perspective on why reliable AI-driven software depends less on clever prompts or smarter models and more on intentional constraints, clear human judgment, and a shift from writing code to orchestrating systems.
Context engineering replaces prompt craftsmanship
Zabłocki's central thesis is that AI performance in real software environments is driven far more by structured, intentional context than by clever prompting. He argues that prompts are a thin interface, while context (codebase boundaries, architectural constraints, historical decisions, tests, and intent) forms the true operating environment for AI systems.
Reliability emerges from control systems, not smarter models
A recurring idea of Krzysztof Zabłocki's is that AI-generated code cannot be made reliable through model upgrades alone. Instead, reliability is an emergent property of control mechanisms: validation layers, deterministic constraints, feedback loops, and clearly defined failure modes. He reframes AI as a non-deterministic component that must be surrounded by guardrails, much as distributed systems are designed to tolerate failure rather than assume correctness.
Programming is shifting from production to orchestration
Rather than predicting the death of coding, Zabłocki describes a role evolution. Developers move away from manually producing code and toward orchestrating intent, verification, and boundaries. The value of an engineer increasingly lies in defining invariants, shaping system constraints, and deciding what AI is allowed to do.
Human judgment becomes more important, not less
Across all three pieces, Zabłocki emphasizes that AI amplifies both good and bad decisions. As automation increases, human judgment shifts upstream: deciding goals, trade-offs, and acceptable risk before AI executes at scale. The cost of unclear intent rises sharply because AI systems will confidently act on ambiguity.
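Zabłocki's guardrail framing can be sketched in a few lines. The generator below is a stub standing in for a real model call (hypothetical, for illustration only); what matters is the pattern: a deterministic validator and a bounded retry loop wrap the non-deterministic component, so the system only ever accepts output it can verify:

```python
import json

def unreliable_generate(prompt: str, attempt: int) -> str:
    # Stand-in for a non-deterministic LLM call (stubbed for illustration):
    # the first attempt returns malformed output, a later one succeeds.
    if attempt == 0:
        return "sure! here is your config..."
    return json.dumps({"name": "user_cache", "max_entries": 128})

def validate(output: str):
    """Deterministic guardrail: accept only JSON with the required keys."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not {"name", "max_entries"} <= data.keys():
        return None
    return data

def guarded_generate(prompt: str, max_attempts: int = 3) -> dict:
    """Treat the model as a component expected to fail; verify, then retry."""
    for attempt in range(max_attempts):
        candidate = unreliable_generate(prompt, attempt)
        result = validate(candidate)
        if result is not None:
            return result
    raise RuntimeError("model output never passed validation")

config = guarded_generate("Emit a cache config as JSON")
```

This mirrors the distributed-systems analogy in the text: failure is assumed, and correctness is established by the surrounding control layer rather than trusted from the component itself.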
Dr. Anna Barnacka, CEO & Founder of MindMics, Former NASA Einstein Fellow
Anna Barnacka, Ph.D., is a Polish astrophysicist turned entrepreneur and health-tech innovator, best known as the founder and CEO of MindMics, the actionable health monitoring company. MindMics has developed a breakthrough in precision monitoring that captures unique human audiome health data through earbuds, using infrasonic hemodynography. With dual Ph.D. degrees in astronomy and physics from the Polish Academy of Sciences and Paris-Sud University, and a NASA Einstein Fellowship at the Harvard-Smithsonian Center for Astrophysics, she built a distinguished academic career researching gravitational lensing, black holes, and high-resolution observational techniques. Barnacka bridges deep scientific research with consumer-focused medical technology by translating complex instrumentation and data analysis into scalable health solutions.
In The New Default, Anna discusses how a small team is building clinical-grade wearable technology by partnering strategically rather than scaling headcount, and how AI enables the biosignal layer that powers next-generation health insights. Drawing on her experience, Anna Barnacka frames AI as a powerful accelerator whose impact depends less on raw model intelligence and more on the quality of domain understanding embedded around it. Across her interviews, she argues that AI without contextual grounding fails quickly, while domain models (formal representations of how systems behave in the real world) provide the scaffolding that allows AI to scale expertise safely and effectively.
Barnacka highlights how AI collapses development costs and multiplies small teams by offloading execution and synthesis, but stresses that true leverage comes when human experts encode assumptions, constraints, and validation logic upfront. In this way, AI does not replace expertise but redistributes it, lowering barriers to entry, breaking traditional industry gatekeepers, and enabling non-experts to access expert-level capabilities.

I can do the job of like five people now, using the AI tool (...) and make sure that our protocols comply. And that's the key for us. That's why even with the smaller teams, we are able to process and achieve pretty good results pretty quickly.
Anna presents a clear, consistent view of how AI creates real value when it is deeply grounded in domain expertise rather than treated as a generic solution. Her core ideas focus on AI as a multiplier of human knowledge, team capacity, and execution speed, especially in complex, regulated, or expertise-heavy fields. Taken together, Anna Barnacka argues that the biggest breakthroughs come not from bigger models, but from using AI to lower structural barriers that once limited who could build, validate, and compete.
Domain-aware integration over generic AI
Anna stresses that AI systems succeed only when they're grounded in domain context and high-quality data, arguing that domain models and expert knowledge radically outperform generic AI in complex, field-specific applications. Relying on well-curated, relevant information is more valuable than feeding a model with large quantities of generic data.
Quality data as a multiplier
She highlights the importance of quality over quantity in data: meticulously labeled, domain-specific datasets enhance AI usefulness far more than vast but poorly characterized datasets, particularly in regulated sectors like MedTech, where precision matters.
AI as a productivity multiplier for small teams
Barnacka illustrates how AI can multiply team productivity by handling roles traditionally filled by multiple specialists. She describes AI not just as a tool but as a force that enables smaller teams to achieve results that once required larger, resource-intensive groups.
Cost and time reduction in complex processes
She argues that AI dramatically lowers both time and financial barriers by automating labor-intensive steps, such as regulatory compliance documentation, allowing teams to focus expert time on validation rather than initial creation.
Democratizing expert knowledge and breaking gatekeepers
Anna emphasizes that AI broadens access to specialist knowledge, reducing reliance on expensive consultants and traditional industry gatekeepers, which enables faster prototyping, validation, and innovation even for teams with limited resources.
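One way to read Barnacka's point about domain models is that expert knowledge ends up encoded as explicit, testable constraints around AI output. The function names, thresholds, and confidence gate below are illustrative placeholders, not MindMics logic or clinical bounds; the pattern is that a physiologically implausible AI-derived value is rejected before it ever reaches a user:

```python
def plausible_heart_rate(bpm: float) -> bool:
    """Expert-encoded domain constraint (illustrative bounds, not clinical)."""
    return 25.0 <= bpm <= 250.0

def accept_estimate(bpm: float, confidence: float, min_confidence: float = 0.8):
    """Gate an AI-derived biosignal estimate behind domain validation.

    Returns the value only if it passes every check, otherwise None,
    forcing downstream code to handle rejection explicitly.
    """
    if confidence < min_confidence:
        return None
    if not plausible_heart_rate(bpm):
        return None
    return bpm

assert accept_estimate(62.0, confidence=0.95) == 62.0   # plausible and confident
assert accept_estimate(400.0, confidence=0.99) is None  # physiologically impossible
assert accept_estimate(70.0, confidence=0.4) is None    # model not confident enough
```

The design choice is that the expert writes the validation logic once, upfront, and the AI pipeline inherits it on every run, which is the "encode assumptions and constraints upfront" leverage the text describes.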
Zbigniew Sobiecki, Co-Founder & CTO of Proofs, Creator of Shotgun
Zbigniew Sobiecki is Co-Founder and CTO of Proofs, an AI-powered platform that enables sales engineering teams to build fully customized proofs of concept in minutes rather than months. He is currently building Shotgun, a CLI-based tool for spec-driven development that helps developers working with AI agents ship production-ready code. Sobiecki has co-founded and led multiple tech ventures, built and advised development teams, and played pivotal roles at companies such as Lite e-Commerce and Forward Operators AI Lab, where he focused on applying artificial intelligence and cutting-edge engineering to real-world business challenges. At Proofs, he combines deep technical expertise with strategic product insight to automate complex development workflows, enabling companies to create tailored software solutions rapidly from a single prompt and significantly streamline sales and implementation processes.
In The New Default, Zbigniew explores why specification is becoming the new IDE, how context engineering replaces prompt engineering, why teams are no longer constrained by their collective experience, and how test suites serve as scaffolding for non-deterministic development. He envisions a future of "infinite teams" where AI agents act as specialized collaborators, expanding the breadth of expertise available to development efforts and enabling diverse skill sets to contribute more effectively. Central to making AI dependable, Sobiecki emphasizes the emerging discipline of context engineering, which designs systems and control layers that guide AI's probabilistic outputs toward predictable, testable outcomes.

We're all, from what I've heard, becoming context engineers one way or the other, trying to figure out which pieces of information to put in what order and how to feed it into the LLM to make sense of it.
Zbigniew articulates a cohesive set of ideas on the risks and opportunities of AI-accelerated software development. Taken together, he frames AI as a force that expands teams and capabilities, but only delivers lasting value when paired with strong architectural judgment, disciplined context management, and systems designed to remain understandable and predictable under rapid change.
Balance speed with system robustness
Sobiecki warns against the “velocity paradox,” where rapid feature output enabled by AI can create excessive system complexity that ultimately slows teams down. He argues that maintaining software determinism and human understanding of architecture is essential to prevent fragility as development accelerates.
Context and communication are vital in AI-assisted workflows
As AI introduces fragmentation across models and agents, he stresses the need for structured communication and context management so that AI outputs remain coherent and integrated, rather than leaving teams juggling partial or inconsistent information.
AI expands the concept of the development team
Sobiecki envisions “infinite teams” where diverse AI agents with different specializations act as integrated collaborators, breaking traditional skill and role boundaries and enabling even non-technical contributors to participate meaningfully in software creation.
Judgment and exploration eclipse narrow technical skills
With AI handling routine tasks, Zbigniew argues that teams should prioritize judgment, research, and exploration over familiar toolsets or narrow technical expertise, thereby encouraging broader strategic thinking about architecture and product design.
Determinism through context engineering ensures predictable AI behavior
Sobiecki highlights context engineering as a discipline for aligning probabilistic AI outputs with predictable system behavior, where specifications and control systems replace rigid coding structures to make AI more trustworthy and reliable.
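Sobiecki's idea of test suites as scaffolding for non-deterministic development can be sketched as a specification expressed in executable form. The slugify example is invented for illustration and has no connection to Shotgun's internals: the spec function encodes the required behavior, and any candidate implementation, human- or AI-written, is accepted only if it passes.

```python
def spec_slugify(impl) -> bool:
    """Executable specification: properties any candidate must satisfy."""
    assert impl("Hello World") == "hello-world"
    assert impl("  leading and trailing  ") == "leading-and-trailing"
    assert impl("MiXeD CaSe") == "mixed-case"
    return True

def candidate(text: str) -> str:
    # One possible implementation, e.g. proposed by an AI agent.
    # It is trusted only because it satisfies the spec, not on inspection.
    return "-".join(text.lower().split())

assert spec_slugify(candidate)
```

This is the determinism-through-specification point in miniature: the AI's output may vary from run to run, but the acceptance criteria do not, so the system's behavior stays predictable even though its producer is probabilistic.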
Is AI Reliable Without Explicit Context and Human Oversight?
Across the perspectives of Krzysztof Zabłocki, Anna Barnacka, and Zbigniew Sobiecki, a shared conclusion crystallizes: the decisive challenge of AI is no longer raw capability, but system design around intelligence. All three reject the idea of AI as a standalone solution, whether in software engineering, science-driven innovation, or organizational scaling, and instead emphasize context, control, and human judgment as the true levers of progress.
Zabłocki highlights the need to embed AI within deliberate architectural constraints, Barnacka underscores domain knowledge as the foundation that makes AI useful and trustworthy, and Sobiecki warns that speed without determinism produces fragile systems despite apparent gains. Together, they frame AI as an amplifier that magnifies intent, structure, and decision quality, pushing leaders, engineers, and teams toward a common imperative: competitive advantage will belong to those who intentionally design resilient human-AI systems, not those who merely adopt powerful tools first.
Why Does AI Make System Design and Human Judgment More Critical?
Krzysztof Zabłocki, Anna Barnacka, and Zbigniew Sobiecki reveal a shared shift in how AI must be understood and applied as it becomes foundational to modern work. Across software engineering, scientific innovation, and AI-driven product development, all three argue that raw model capability is no longer the limiting factor; instead, outcomes are shaped by context, control, domain knowledge, and human judgment.
AI consistently appears as an amplifier, accelerating both strengths and weaknesses, making system design, determinism, and intent-setting critical responsibilities for teams and leaders. Rather than signaling the end of expertise or craftsmanship, their insights point to an evolution toward orchestration, validation, and stewardship, in which long-term advantage belongs to those who deliberately design resilient, human-led AI systems rather than pursuing speed or adoption for its own sake.