NSFW AI Is Here

and It’s a Wake-Up Call for Business

Hey friends,

Last week, I did something I immediately regretted.

I turned on Grok’s “Sexy Mode.”

Did you know about Grok’s Spicy & Sexy Modes?

At first, it was hilarious. Then it was deeply uncomfortable. Within seconds, the AI was flirting with me like it was auditioning for a soap opera.

Funny for a moment, but also terrifying.

Because this is the same AI platform business leaders are experimenting with at work — and it’s trained on everything from Twitter to OnlyFans. That means it can also generate fake, explicit content of real people. One journalist even asked Grok to make a topless image of Taylor Swift… and it did.

That’s not “innovation.” That’s danger in plain sight.

If you’re leading a team, I want you to understand what this means — and what we can learn from it.

When AI Stops Being Safe for Work

Models like Grok are built differently. They’re trained on whatever’s out there — no filters, no boundaries, no age verification.

So if your business connects this kind of model to your tools, files, or calendars, you’re exposing sensitive data to a system that can’t be trusted.

And here’s the scary part: people are still using it.

I’ve already met business owners saying, “We’re testing Grok because it can handle large documents.” Sure, it can — but it’s also unregulated. If it can generate something unsafe, imagine what it can do with your internal data.

So here’s my rule:

If you wouldn’t want it indexed, don’t put it in.

Responsible AI Isn’t Optional

Australia’s National AI Centre just released its AI Adoption Report, and the results were both promising and worrying.

The good news: adoption is rising. The bad news: many businesses still don’t know how to use AI safely.

43% say they check AI outputs before they reach customers.
57% don’t.
😬 18% admit they have no responsible AI practices at all.

That’s not a tech issue — that’s leadership.

At the very least, every organisation should have:

  • Human review before AI content goes public.

  • A basic AI policy (we have a free one at Dumb Monkey AI Academy).

  • Regular checks to make sure tools behave as expected.

If you’re serious about scaling with AI, governance has to come first.

Why Construction and Manufacturing Are Falling Behind

One finding that surprised me was how low AI adoption is in construction and manufacturing.

These are industries built on process, compliance, and documentation — everything AI is good at. Yet the uptake is slow.

When Davina and I spoke about this on the podcast, we agreed it’s not about capability, it’s about culture.

Here’s an example: we built an AI agent that connects to the National Construction Code. You can literally ask it, “What’s the minimum stair height for a house in Victoria?”

It searches the code, finds the right clause, and gives you the answer instantly.

That’s hours of compliance time saved — safely.

AI isn’t just for tech startups; it’s for anyone who deals with repetitive rules and paperwork.

The Right Tools for the Job

Not every AI model is created equal.

If you’re working with business data, stick to tools that prioritise enterprise safety:

  • Microsoft Copilot (M365) — secure, built for teams.

  • ChatGPT Plus — powerful, consistent, well-regulated.

  • Claude by Anthropic — great reasoning and strong ethics.

Where AI Is Heading Next

Two big trends are taking shape:

  1. Reasoning Models – Tools that “think” about their answers before replying. ChatGPT-5 is the first real step.

  2. AI Agents – Assistants that can actually do things: browse the web, fill forms, post jobs, send emails.

I’ve been testing Comet, a browser by Perplexity, that can act for you. I can literally say:

“Write a job ad, post it on Seek and LinkedIn.”

And it gets it done.

It’s incredible… and risky. Power without governance becomes chaos fast.

The Real Lesson

AI isn’t slowing down. The only question is how responsibly we move with it.

The Grok episode reminded me of something simple but crucial:

Adopt fast. Govern faster.

Build your policy. Train your team. Keep humour in the mix — but keep “spicy mode” far away from your business.

Let’s make AI actually work at work.

🛠 AI Tip of the Week

Red-Team Your Model Before You Trust It

Before you roll any AI tool into production, push it to its limits. Ask unsafe or impossible prompts in a sandbox environment to see how it behaves.

If it refuses unethical or illegal requests → safe to explore further.

If it complies → walk away.

Governance starts before the first real input.

— Aamir Qutub CEO | Enterprise Monkey | Dumb Monkey

🎧 Episode 4: YouTube | Spotify | Apple

🧰 Dumb Monkey AI Academy — free AI policy template and practical training for responsible adoption

📱 Download the Dumb Monkey AI Academy App: IOS or Android

📘Read Aamir's book (an AI playbook disguised as fiction for easier readability): The CEO Who Mocked AI (Until It Made Him Millions)