
Before You Hire an "AI Agency": 5 Questions to Ask First

An AI agency pitches a 2-week, $1,500 build. Sounds cheap. New Georgia Tech research traced 35 confirmed security flaws to AI-built software in March 2026 alone. Here are five questions every Australian small business owner should ask before signing.

You get a Facebook ad. Or a LinkedIn message. Or a WhatsApp from someone claiming a friend referred them. The pitch is some version of this:

"We build AI-powered apps for small business in 2 weeks. Custom booking system, customer chatbot, automated invoicing — all integrated with your existing tools. $1,500. Done."

It sounds appealing. It is faster than anything you have heard before. The price is half what a normal developer would charge. The agency has a slick website and a few testimonials.

There is a chance this works out fine.

There is also a fast-growing chance the app they hand you has security holes baked in from day one — and that those holes will be exploited within months.

What the data actually shows

In March 2026, researchers at Georgia Tech's School of Cybersecurity & Privacy published the first major dataset on what is now called "vibe coding." Vibe coding is the practice of using AI tools — Claude Code, GitHub Copilot, Gemini — to generate software, then shipping that software to production without reviewing it.

The dataset, called the Vibe Security Radar, scanned more than 43,000 public security advisories. It identified vulnerabilities directly caused by AI-generated code.

The numbers are striking:

  • In the second half of 2025: about 18 confirmed cases across six months.
  • In Q1 2026 alone: 56 cases.
  • In March 2026 alone: 35 cases. More than all of 2025 combined.

Of the 74 confirmed cases the team has logged so far, 14 were rated critical — meaning attackers could fully compromise the system — and 25 were rated high risk.

Researcher Hanqing Zhao put it plainly: if you ship AI output to production, you need to review it the way you would review a junior developer's first pull request. Especially anything around customer login and customer data.

The agencies pitching a 2-week, $1,500 build are doing the opposite. Skipping that review is the only way to hit the price and the timeline.

What "vibe coding" actually means for you

Imagine hiring a builder to put up a small extension on your shop. You agree on a 2-week timeline at half the normal price. The builder uses an AI tool to design the layout, lay the foundations, and frame the walls. The work is fast. The walls go up. The doors close. The lights come on.

The builder does not check that the foundations match local soil. Does not check that the load-bearing wall actually bears load. Does not check that the wiring is legal under Australian electrical standards.

Six months later, after the first storm, the wall collapses. You discover the wiring is non-compliant. You also discover your insurance will not cover any of it.

Software works the same way. Vibe-coded software looks fine from the outside. It logs in. It takes payments. It sends customer messages. Until one day a vulnerability is exploited and:

  • Customer credit card details leak
  • Your customer database is downloaded by someone you have never heard of
  • The booking system is hijacked to send phishing messages to your clients
  • You wake up to a ransom email

This is not theoretical. The Georgia Tech data is built from real, public, exploited vulnerabilities. Most were shipped by people who skipped the review step.

Why this hits Australian businesses harder

Under Australia's Notifiable Data Breaches scheme (Privacy Act 1988), if customer data leaks in a breach likely to cause serious harm, you have 30 days to assess a suspected breach, and once it is confirmed you must notify the Office of the Australian Information Commissioner and every affected customer as soon as practicable.

If the agency that built your app vibe-coded its way through your booking system, and a customer's personal data leaked because of a security flaw they never reviewed, the OAIC asks you, not them. You signed the contract. You uploaded the data. You are the entity that holds it.

The agency keeps their $1,500. You inherit the breach.

The 5 questions to ask before signing

Before you sign anything with an AI agency, freelancer, or any developer who says they "build with AI", ask these five questions in writing. The answers tell you more than any portfolio.

1. "Who reviews the AI-generated code before you deliver it? What is their qualification?"

What you want to hear: a named senior developer with stated experience. A clear statement that no code ships without review.

What should worry you: "AI doesn't make mistakes" or "we test everything thoroughly" without naming a human reviewer.

2. "If a security vulnerability is found after delivery, who fixes it, and how fast?"

What you want to hear: a written commitment. "24-hour response and 7-day fix on critical issues, included in the support contract."

What should worry you: "we're confident there won't be any" or "that's in our standard warranty" without specifics.

3. "Give me three Australian clients I can call directly, with their permission, that you delivered for in the last 12 months."

What you want to hear: three names, contact details, and an offer from the agency to make the introductions. Australian clients matter — they have skin in the game on Australian privacy law.

What should worry you: a list of logos with no contact details, or "our clients prefer privacy" deflection.

4. "What insurance do you carry, and does it cover damage caused by AI-generated code specifically?"

What you want to hear: Professional Indemnity insurance with a policy number, sum insured of at least $1 million, and Cyber Liability cover separately.

What should worry you: "we don't need insurance" or "that's not standard for our type of work."

5. "Show me your code review checklist. What do you specifically check around customer login and customer data handling?"

You do not need to understand the technical answer. You need to see that there is an answer. If they cannot show you a checklist, they do not have one. If they cannot describe what they check, they are not checking.

Georgia Tech's researchers specifically named login (they call it "authentication") and data handling (they call it "input handling") as the two highest-risk areas in vibe-coded software.

What good looks like

A reliable AI-assisted developer will answer all five questions calmly and in writing. They will not get defensive. They will probably appreciate the questions, because most of their competitors cannot answer them.

A red-flag agency will rush you. They will talk about how their AI is "different." They will offer a discount if you sign today. They will push back on the questions as "overkill for a small business."

The discount is the bait. The vibe-coded app is the hook. Your customer data is the catch.

You don't need to become a software expert to protect your business from this. You need five questions and the willingness to walk away if the answers don't come.

This is the counterpoint to the previous post in this series, which showed why doing AI boringly — one task at a time — is what actually grows businesses 2.8x faster. The fastest way to lose those gains is to outsource the build to someone who skipped the review step. If you have not started yet, this post shows where to start without hiring anyone.