At Tuesday’s Bondi AI Collective meetup at WOTSO Bondi Junction, the group dived deep into one of the most exciting — and controversial — trends in AI right now: autonomous AI agents, especially OpenClaw 🦞, how they’re being used, and what the implications are for workflows, security and the future of AI.
Instead of a formal talk, the session was a wide‑ranging conversation — building from how agent frameworks work, to hands‑on experiments, challenges, and the risks ahead.
🦞 OpenClaw — More Than Just an AI Chatbot
The central thread of the discussion was OpenClaw — an open‑source autonomous AI agent framework that lets AI do real work for you.
Here’s how people at the meetup described and explored it:
- OpenClaw is built to run on your own system — though many people also host it on servers or cloud VMs. It’s not just a chat interface; it’s designed to take actions on your behalf, integrating with messaging apps, file systems, and tooling.
- Its core is built around a set of workspace files that define who the agent is, what tasks it should do, its memory, and its schedule. The agent updates these files iteratively as it runs tasks.
- A standout feature is persistent memory — the agent “remembers” previous interactions, preferences, and context across sessions. That’s what makes it feel more like a virtual assistant than a one‑off chatbot.
- It can run on channels like Telegram, WhatsApp or Slack and act autonomously — automatically checking things or executing instructions — once you give it permissions.
- People in the room described building dashboards to monitor when the agent is thinking, what files it’s using, its heartbeat (scheduled checks), and its outputs, so you don’t just have a black box.
💡 In the conversation, several builders explained how they connect OpenClaw to things like Notion API to automate resume generation, monitoring jobs and crafting custom outputs — essentially turning it into a workflow engine rather than just a text generator.
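The persistent‑memory idea described above can be sketched in a few lines of Python: load a memory file at startup, append to it as the agent works, and write it back so the next session picks up where the last one left off. The `workspace/memory.json` path and its structure are illustrative assumptions, not OpenClaw’s actual workspace format.

```python
import json
from pathlib import Path

# Hypothetical workspace layout; OpenClaw's real file names may differ.
MEMORY_FILE = Path("workspace/memory.json")

def load_memory() -> dict:
    """Load the agent's persistent memory, or start fresh on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"preferences": {}, "notes": []}

def remember(memory: dict, note: str) -> None:
    """Append a note and write memory back so the next session sees it."""
    memory["notes"].append(note)
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

memory = load_memory()
remember(memory, "User prefers Telegram over Slack")
```

Because the state lives in a plain file rather than in the model, the “memory” survives restarts, model swaps, and even inspection by a human — which is exactly what makes it feel like an assistant rather than a one‑off chat.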
🔄 Do Agents Need LLMs?
A great question came up: Does an agent actually need a large language model (LLM)?
The group discussed this:
- For deterministic tasks — like running a script or scheduling — you might not need a deep reasoning model.
- For complex, multi‑step and adaptive tasks (interpreting goals, generating content, planning workflows), the language reasoning layer is valuable.
- Some builders even mix models — using lighter, less costly models for routine tasks and switching to richer models when deeper reasoning is needed.
This reflects a key shift: agents orchestrate tools and actions, and LLMs supply the reasoning layer — but they aren’t always required for every task.
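That mixed‑model approach can be sketched as a simple router: deterministic tasks skip the LLM entirely, routine text tasks go to a cheap model, and only planning‑style work hits the expensive one. The task taxonomy and backend names here are illustrative assumptions, not part of any real framework.

```python
# Tasks that are pure code paths: no language model needed at all.
ROUTINE_TASKS = {"run_script", "schedule", "heartbeat"}

def route(task_type: str) -> str:
    """Pick a backend for a task: no LLM for deterministic work,
    a light model for routine text, a heavy model for planning."""
    if task_type in ROUTINE_TASKS:
        return "no-llm"        # deterministic code path, zero token cost
    if task_type in {"summarise", "classify"}:
        return "small-model"   # cheaper and faster for routine text work
    return "large-model"       # multi-step reasoning, planning, generation

print(route("schedule"))       # prints: no-llm
print(route("plan_workflow"))  # prints: large-model
```

The point of the pattern is cost and predictability: every task that can be handled without a reasoning model should be, and the router is where that decision lives.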
🛠 Hands‑On Experiments — What People Are Building
A number of real experiments were shared:
- Dashboards that show agent status, memory files, heartbeat, and logs so humans can see what’s happening under the hood.
- Automation workflows that generate business documents or manage Google Drive files.
- A fun hack where someone connected OpenClaw to WhatsApp and configured it to read family messages about dinner and then create a Woolies order via a command‑line interface — essentially turning a conversation into action.
- People have built simple web apps from just a handful of messages, with the agent handling everything behind the scenes — including letting them watch what the agent is doing by querying its internal loops.
These experiments show how agents can be extended to real work — not just answer questions, but perform tasks across systems.
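The heartbeat monitoring those dashboards rely on can be sketched as two tiny functions: the agent loop writes a timestamp each cycle, and a watcher flags the agent as stale if the last beat is too old. The `workspace/heartbeat.json` path is a hypothetical example, not a real OpenClaw file.

```python
import json
import time
from pathlib import Path

HEARTBEAT_FILE = Path("workspace/heartbeat.json")  # hypothetical path

def write_heartbeat() -> None:
    """Called by the agent loop each cycle so watchers can see it's alive."""
    HEARTBEAT_FILE.parent.mkdir(parents=True, exist_ok=True)
    HEARTBEAT_FILE.write_text(json.dumps({"last_beat": time.time()}))

def is_stale(max_age_seconds: float = 300.0) -> bool:
    """A dashboard flags the agent if the last beat is missing or too old."""
    if not HEARTBEAT_FILE.exists():
        return True
    last = json.loads(HEARTBEAT_FILE.read_text())["last_beat"]
    return time.time() - last > max_age_seconds

write_heartbeat()
print("stale:", is_stale())  # freshly written, so prints: stale: False
```

A dashboard polling this file turns the agent from a black box into something you can watch — and, crucially, something whose silence is itself a signal.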
⚠️ Security Reality — Not Just Theoretical
While the meetup was enthusiastic and exploratory, real‑world developments highlight important security concerns about OpenClaw:
📉 Security warnings from authorities: China’s industry ministry recently issued a warning that misconfigured OpenClaw deployments can pose significant cyberattack and data breach risks if left under default or weak settings. It urged stronger audits, identity authentication, and access controls.
🐙 Malicious ecosystem risks: Researchers have documented that OpenClaw’s ecosystem of “skills” — extensions published by the community to add functionality — has been abused. Hundreds of these skills have contained malware or behaved like malware delivery chains, giving attackers ways to steal credentials or run harmful commands if installed unwittingly.
⚠️ High‑impact vulnerabilities: Independent analyses have shown serious architectural risks — including remote code execution, lack of meaningful sandboxing, and agents running with broad system access — making the tool a high‑value attack surface if not carefully controlled.
These developments underscore that agents that execute real actions locally are fundamentally different from web chatbots — and users, especially non‑technical ones, need to take precautionary measures like sandboxing, permission scoping, and avoiding connection to sensitive systems without oversight.
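Permission scoping can be as simple as an explicit command allowlist plus a hard timeout before anything reaches the shell. This is a minimal sketch under assumed names; the allowlist and the `run_scoped` helper are illustrative, not part of OpenClaw.

```python
import shlex
import subprocess

# Only commands on this explicit allowlist may run. The list itself is
# an illustrative assumption, not any real framework's default.
ALLOWED_COMMANDS = {"ls", "cat", "echo"}

def run_scoped(command: str, timeout: float = 10.0) -> str:
    """Run a command only if its executable is allowlisted, with a hard timeout."""
    args = shlex.split(command)
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowlisted: {command!r}")
    result = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
    return result.stdout

print(run_scoped("echo hello"))      # allowed: prints hello
try:
    run_scoped("rm -rf ./scratch")   # refused: rm is not on the allowlist
except PermissionError as e:
    print(e)
```

A deny‑by‑default list like this is crude, but it inverts the dangerous default: instead of the agent being able to run anything it generates, it can run only what a human has explicitly pre‑approved.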


🤖 Practical Safety Advice From the Community
In the meetup discussion and broader community responses:
- Several people emphasised running agents in segmented environments or containers, with restricted permissions, so they don’t accidentally end up with full system access.
- Some builders recommended treating agents like “labor” — with narrow scopes, clear kill switches, and explicit human approvals before critical actions.
- Others highlighted the importance of logging and observability — so you can easily see what an agent did and stop it if something goes wrong.
These are essential “guardrails” when dealing with tools that can read files, launch commands, and interact with external APIs.
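Those guardrails (narrow scopes, a kill switch, human approval before critical actions, logging) can be combined in one small wrapper around the agent’s action loop. Everything here is a hypothetical sketch: the `workspace/STOP` kill‑switch file, the action names, and the `step` helper are all invented for illustration.

```python
from pathlib import Path

# Hypothetical kill switch: a human touches this file to halt the agent.
KILL_SWITCH = Path("workspace/STOP")

# Actions that must never run without explicit human sign-off.
CRITICAL_ACTIONS = {"send_email", "place_order", "delete_file"}

def approved(action: str) -> bool:
    """Ask a human before anything critical; routine actions pass through."""
    if action not in CRITICAL_ACTIONS:
        return True
    answer = input(f"Agent wants to {action!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def step(action: str) -> str:
    """One guarded step of the agent loop."""
    if KILL_SWITCH.exists():
        return "halted"                   # the kill switch beats everything
    if not approved(action):
        return "refused"
    print(f"LOG: executing {action}")     # observability: log every action
    return "done"

print(step("check_calendar"))  # routine action: logged and executed
```

The ordering matters: the kill switch is checked first so a human can always stop the loop, and the log line runs on every executed action so there is a trail to audit afterwards.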
🧠 Big Picture — Autonomy, Innovation & Responsibility
- Agents are changing workflows: We’re shifting from asking AI questions to giving AI tasks — and agents that can act autonomously are part of that shift.
- Innovation is moving fast: From simple chat to persistent, multi‑tool automation in just a few years — this space is evolving rapidly.
- Governance matters: As autonomous systems touch more of our data and platforms, human oversight, security boundaries, and responsibility become central to adoption.
- Practical use vs hype: Not every agent needs to be “smart” — sometimes narrow, procedural automation is more predictable and safer.
🎯 Final Thoughts — Where We’re At
OpenClaw and its agent ecosystem represent a fascinating frontier in AI — tools that don’t just tell you what to do, but do things for you.
But that frontier comes with real challenges:
- Powerful autonomy opens doors to productivity gains.
- Broad access and extensibility raise serious security questions.
- Human governance and safety practices are no longer optional — they’re necessary.
This meetup captured that tension well — the excitement of building and experimenting, balanced with an understanding that real‑world deployment requires care, boundaries, and awareness of risk.
If you enjoyed this conversation and want to explore more practical, hands‑on applications of autonomous AI — or dig into security and governance — join us at the next Bondi AI Collective meetup! 🐙🚀


