
Bondi AI Collective: Capturing Human Expertise with an OODA Loop and the Implications

8 July, 2025

Our latest Bondi AI Collective gathering was a sell-out! A wonderful group of AI aficionados from Bondi and further afield gathered to explore the cutting edge of AI.

This session featured Ebenezer Eyeson-Annon of UNCAPT. Below is a summary of what Ebenezer shared on UNCAPT and its journey, followed by the key themes of the vigorous discussion among the group that his demos and presentation sparked.

From Trading Systems to Enterprise Reasoning: The Evolution of UNCAPT

UNCAPT’s journey began with deep roots in engineering and physics, culminating in its founding five years ago as AI capabilities reached a new level of readiness. The team initially built closed-loop systems for autonomous trading—models that could detect patterns, make trades, identify exit signals, and act without human intervention. These early models demonstrated technical effectiveness and soon found traction in enterprise contexts, beginning with clients like OneWeb, a low Earth orbit satellite company.

As transformer-based language models began to emerge, UNCAPT recognized the potential for broader application and pivoted toward enterprise decision emulation—specifically exploring whether the role of a Chief Commercial Officer could be functionally replicated by AI. This led to a critical insight: the most transferable and impactful capability wasn’t just knowledge or pattern recognition, but reasoning. To capture and replicate this, UNCAPT built its own reasoning training platform—designed to model how experts think and decide—forming the backbone of what would evolve into their AI agent systems.

AI Agents for Negotiation and Performance Benchmarking

The concept of “autopilot mode” evolved into autonomous AI agents, first deployed in negotiation scenarios. A standout test included competing in the World Negotiation Championships against firms like PwC and Deloitte, where the AI outperformed in preference prediction and outcomes—offering a glimpse into the future of autonomous decision support in complex human domains.

Capturing and Training Human Reasoning via the OODA Loop

Inspired by the OODA loop (Observe, Orient, Decide, Act), the reasoning platform was trained by experts in real time. This process highlighted a key insight: experts often refine their own reasoning only when presented with slightly incorrect alternatives. This iterative training loop enables the AI to reflect expert cognitive pathways, making it adaptable to diverse domains.
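To make the idea concrete, here is a deliberately minimal sketch of an OODA-style loop with an expert-correction hook. This is not UNCAPT's platform; the rule table, method names, and the feedback mechanism are all illustrative assumptions, showing only how an expert's correction of a slightly wrong decision can be folded back into future loops.

```python
from dataclasses import dataclass, field

@dataclass
class OODAAgent:
    """Toy OODA-loop agent (Observe, Orient, Decide, Act).

    Hypothetical sketch: the expert_feedback hook mimics the insight
    that experts refine reasoning when shown a slightly-off alternative.
    """
    rules: dict = field(default_factory=dict)  # observation -> decision

    def observe(self, signal):
        # Observe: normalise the raw signal.
        return signal.strip().lower()

    def orient(self, observation):
        # Orient: match the observation against learned decision rules.
        return self.rules.get(observation, "no-rule")

    def decide(self, orientation, expert_feedback=None):
        # Decide: present the current (possibly wrong) decision to an
        # expert; their correction becomes the decision for this loop.
        if expert_feedback and expert_feedback != orientation:
            return expert_feedback
        return orientation

    def act(self, observation, expert_feedback=None):
        # Act, and fold the (possibly corrected) decision back into the
        # rule table so the next loop benefits from the refinement.
        obs = self.observe(observation)
        decision = self.decide(self.orient(obs), expert_feedback)
        self.rules[obs] = decision
        return decision

agent = OODAAgent()
# First pass: no rule exists, so the expert corrects the proposed decision.
print(agent.act("client stalled", expert_feedback="offer concession"))
# Second pass: the refined rule now applies without intervention.
print(agent.act("client stalled"))
```

The interesting design point is that learning happens only at the moment of disagreement, mirroring the observation that experts articulate their reasoning best when confronted with a near-miss alternative.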

Deployment in Healthcare and Mental Health Applications

The reasoning AI agents have been applied across multiple healthcare domains including reablement (elderly care), mental health, neurodevelopment, and disability. In particular, they are reducing the long delays in accessing specialist assessments (typically 9 to 24 months) by replicating expert-level assessment and care coordination through intelligent agents.

Knowledge Systems and Generalizable AI Architectures

Complementing reasoning is a robust, generalizable knowledge engine that ingests curated expert content (papers, books, podcasts). Information is transformed into axioms, postulates, and structured knowledge clusters using embedding and clustering algorithms. Contradictions are surfaced for human review, ensuring accuracy and ongoing evolution of the knowledge base.
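A stdlib-only sketch of that pipeline, under loud assumptions: real systems would use learned embeddings and proper clustering, whereas here Jaccard word overlap stands in for embedding similarity, greedy single-link grouping stands in for clustering, and a naive negation check stands in for contradiction detection. The corpus sentences are invented. The point is only the shape of the flow: ingest statements, group similar ones, and surface conflicts for human review rather than resolving them automatically.

```python
from itertools import combinations

def tokens(text):
    return set(text.lower().replace(".", "").split())

def similarity(a, b):
    # Jaccard word overlap as a crude stand-in for embedding similarity.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def cluster(statements, threshold=0.5):
    # Greedy single-link clustering: join a statement to the first
    # cluster containing a sufficiently similar member.
    clusters = []
    for s in statements:
        for c in clusters:
            if any(similarity(s, m) >= threshold for m in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def contradictions(group):
    # Naive check: similar statements differing by a negation are
    # flagged for human review, not auto-resolved.
    return [(a, b) for a, b in combinations(group, 2)
            if ("not" in tokens(a)) != ("not" in tokens(b))]

corpus = [
    "Early mobilisation improves reablement outcomes.",
    "Early mobilisation does not improve reablement outcomes.",
    "Satellite latency depends on orbital altitude.",
]
groups = cluster(corpus)
for g in groups:
    for pair in contradictions(g):
        print("REVIEW:", pair)
```

The human-review step is the load-bearing choice here: clustering can find candidate conflicts cheaply, but deciding which statement is correct stays with an expert, which is what keeps the knowledge base accurate as it evolves.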

Interfaces and Human Oversight in Clinical Settings

The demonstration showed a real-world care planning interaction, where the AI conducted structured assessments, highlighted missing or inferred data, and required human review of assumptions before finalizing care plans. The architecture explicitly supports human-in-the-loop validation, particularly vital in clinical and high-stakes applications.
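The gating behaviour described above can be sketched in a few lines. Everything here is hypothetical (the field names, the `needs_review`/`approved` states, the sign-off function); it illustrates only the principle that a plan with missing or inferred data cannot be finalised until a human has reviewed the flagged assumptions.

```python
REQUIRED_FIELDS = {"mobility", "medication", "living_situation"}  # illustrative

def draft_plan(assessment, inferred=()):
    """Draft a care plan, blocking finalisation until a human reviews
    missing and inferred fields (a human-in-the-loop gate)."""
    missing = REQUIRED_FIELDS - assessment.keys()
    flags = sorted(missing) + [f"inferred:{f}" for f in sorted(inferred)]
    return {"plan": dict(assessment),
            "flags": flags,
            "status": "needs_review" if flags else "ready"}

def human_approve(draft):
    # Explicit sign-off: assumptions are accepted by a named reviewer,
    # never silently by the system.
    draft["status"] = "approved"
    return draft

draft = draft_plan({"mobility": "walker", "medication": "reviewed"},
                   inferred=("mobility",))
print(draft["status"], draft["flags"])
```

Separating "ready" from "needs_review" in the data itself, rather than in the UI, makes the human-in-the-loop requirement auditable, which matters in clinical and other high-stakes settings.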

Confidence, Escalation, and Multi-Agent Oversight

A key principle is that humans should manage escalation based on the AI’s expressed uncertainty. While confidence estimation in LLMs remains imperfect, current systems can surface useful uncertainty indicators. Multi-agent architectures—where different AIs cross-verify reasoning—are already outperforming humans in specific domains, such as medical diagnostics, but depend on architectural design and task definition.
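One way to picture that escalation logic, as a sketch with assumed numbers: several independent agents each return an answer with a confidence score, and the decision is auto-accepted only when the agents agree and mean confidence clears a threshold. The 0.8 cut-off and the voting rule are illustrative choices, not anything specific to the systems discussed.

```python
from collections import Counter

ESCALATION_THRESHOLD = 0.8  # illustrative cut-off, not a vendor setting

def triage(agent_answers):
    """Route a decision based on cross-agent agreement and confidence.

    agent_answers: list of (answer, confidence) pairs from independent
    agents. Disagreement or low mean confidence escalates to a human.
    """
    votes = Counter(answer for answer, _ in agent_answers)
    top, count = votes.most_common(1)[0]
    agreement = count / len(agent_answers)
    mean_conf = sum(c for _, c in agent_answers) / len(agent_answers)
    if agreement < 1.0 or mean_conf < ESCALATION_THRESHOLD:
        return ("escalate", top)
    return ("auto", top)

# Unanimous, confident agents: handled automatically.
print(triage([("benign", 0.95), ("benign", 0.90), ("benign", 0.92)]))
# One dissenting agent: routed to a human.
print(triage([("benign", 0.95), ("malignant", 0.60), ("benign", 0.90)]))
```

Note that the human role here is exactly the one the discussion anticipated: not re-doing every decision, but managing the escalated residue where the agents themselves signal uncertainty.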

Human Roles in the AI Era: Escalation, Creativity, and Knowledge Stewardship

Looking forward, clinicians and professionals are likely to transition to roles focused on managing edge cases, validating AI outputs, and providing emotional, human-centered care. Much as mechanization shifted agriculture from employing roughly 80% of the workforce to around 2%, experts will become curators of risk, creative problem-solvers, and stewards of evolving knowledge banks.

Ethics, Accountability, and Risk Ownership

Discussion surfaced critical themes of AI accountability and human responsibility. While AI can support decisions, humans remain accountable—legally and ethically. Future models might include insurability of AI decisions (e.g., Munich Re’s hallucination insurance) and new professional roles centered on risk ownership and longitudinal care partnerships.

Scaling, Accessibility, and Jevons Paradox

As AI systems reduce the cost of expertise delivery, usage can scale massively—invoking Jevons Paradox, where lower costs drive greater consumption. This trend is particularly salient in domains with unmet demand, like healthcare, where AI democratizes access to expertise and accelerates intervention.

Engagement, Learning, and Cognitive Depth in Human-AI Collaboration

There was reflection on the cognitive impact of AI-generated outputs—will people meaningfully engage with AI-generated care plans or essays? It was suggested that interfaces prompting curiosity and deeper questioning (e.g., NSW’s EduChatter app) could transform AI into educational scaffolds rather than cognitive crutches.

Rethinking Organization, Structure, and Curiosity in the AI Age

With AI transforming how we store, search, and access information, the traditional emphasis on manual organization may decline. Instead, tagging, clustering, and retrieval are delegated to AI systems. Yet, fostering curiosity and structured inquiry—particularly among students—remains a human imperative that AI can amplify if deliberately designed.

Closing Reflections: Building a Collective AI-Human Intelligence Future

The session closed with reflections on how AI and humans can co-evolve, emphasizing that AI can either shape us passively or serve as a tool we deliberately shape. Communities like Bondi Innovation’s AI Collective offer a platform not just for demos, but for co-creating the future through active, engaged, and ongoing dialogue.

Sign up for our newsletter for all our upcoming events and community updates.

