Goal: when a vehicle turns into my driveway, I don’t just want a generic “motion detected” alert — I want a notification that reads like a tiny security report: what showed up, where it is, and (when it’s visible) what the plate says.
In the video below, I walk through how I’m doing this today using a Reolink Duo 3, Home Assistant, and Google Gemini. The punchline is simple: treat your camera like an image sensor, grab a snapshot at the right moment, send it to an AI model with a tight prompt, and then turn the response into a rich, human-friendly alert.
Watch the video: If you want the full walkthrough (including my exact prompt strategy and what worked / didn’t at ~40 feet), the YouTube version is the canonical reference.
What this setup actually gives you
- Vehicle-aware alerts (“white pickup truck”, “blue sedan”, “delivery van”, etc.) instead of generic motion.
- Better context (e.g., “vehicle stopped at the mailbox” vs “vehicle pulled into the driveway”).
- Plate reading when possible (lighting and angle matter a lot — it’s not magic, but it can be surprisingly good).
High-level architecture
I’m intentionally keeping this “glue” simple so it stays reliable:
- Trigger: camera AI motion (or a driveway zone) fires in Home Assistant.
- Snapshot: Home Assistant grabs a still image from the camera.
- AI analysis: the snapshot is sent to Gemini with a prompt tuned for vehicle details (make/model/color, plate only if readable).
- Notify: Home Assistant sends a push notification containing the AI summary, plus the snapshot.
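In Home Assistant terms, those four steps can live in a single automation. Here's a minimal sketch of that glue, assuming the Reolink integration exposes a vehicle-detection binary sensor and that the Google Generative AI integration's `generate_content` action is set up; every entity ID and filename below is a placeholder, not my actual config:

```yaml
# Sketch only: swap in your own entity IDs and paths.
# The snapshot path must be allowed via allowlist_external_dirs.
alias: Driveway vehicle alert
triggers:
  - trigger: state
    entity_id: binary_sensor.driveway_vehicle  # placeholder Reolink vehicle sensor
    to: "on"
actions:
  # Step 2: grab a still at the moment the trigger fires.
  - action: camera.snapshot
    target:
      entity_id: camera.driveway
    data:
      filename: /config/www/driveway_latest.jpg
  # Step 3: send the snapshot to Gemini with the tuned prompt.
  - action: google_generative_ai_conversation.generate_content
    data:
      prompt: >-
        Describe the vehicle in this image: color, type, and where it is.
        If the license plate is not clearly readable, say "plate not
        readable" - do not guess.
      filenames:
        - /config/www/driveway_latest.jpg
    response_variable: vehicle_report
  # Step 4: push the AI summary plus the snapshot itself.
  - action: notify.mobile_app_my_phone
    data:
      title: Driveway vehicle
      message: "{{ vehicle_report.text }}"
      data:
        image: /local/driveway_latest.jpg
```

The `response_variable` is what lets the notification reuse the model's text directly instead of a canned message.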
What matters most (the “gotchas”)
1) Prompting beats wishful thinking
The biggest improvement came from being explicit in the prompt about what I want and what I don’t want. For example: “If the license plate is not clearly readable, say ‘plate not readable’ — do not guess.” That one line is the difference between a helpful alert and an untrustworthy one.
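To make that concrete, here's the shape of a prompt in that spirit. The wording below is my illustration of the strategy, not the exact prompt from the video:

```yaml
# Illustrative prompt only; tune the wording to your own camera and scene.
prompt: >-
  You are analyzing a driveway security snapshot. Report, in one short
  sentence each:
  1. Vehicle type and color (e.g., "white pickup truck").
  2. Where it is (pulled into the driveway, stopped at the mailbox, etc.).
  3. The license plate, ONLY if it is clearly readable. If it is not
     clearly readable, say "plate not readable" - do not guess.
  Do not speculate about anything you cannot see in the image.
```

The structure matters as much as the guardrail: asking for short, enumerated answers keeps the notification scannable on a lock screen.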
2) Camera placement and lighting drive accuracy
At my distance (roughly 36–40 feet), the Duo 3 is a great fit because it gives me a wide view without forcing a tiny plate into a handful of pixels. Night performance and glare still matter; if you’re serious about plate reading you’ll care about angle, exposure, and whether headlights are washing out the frame.
3) Treat the result as “security context,” not forensic evidence
I use these notifications to decide whether I should pay attention right now — not to build a courtroom-grade record. The system is fantastic for "delivery vs neighbor vs unknown vehicle," and it's good enough to be genuinely useful even when plate reading isn't reliable.
Why I like this approach
What I love about doing this in Home Assistant is flexibility. I can swap cameras, adjust prompts, change the trigger logic, or route notifications differently (phone, tablet wall display, text-to-speech, etc.) without being locked into a camera vendor’s built-in AI feature set.
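As one example of that flexibility, the notify step can be swapped without touching the trigger, snapshot, or AI steps. A sketch that speaks the AI summary on a smart speaker instead of pushing to a phone (entity IDs are placeholders, and `vehicle_report` is the response variable from the Gemini step):

```yaml
# Alternate "notify" step: text-to-speech instead of a phone push.
actions:
  - action: tts.speak
    target:
      entity_id: tts.google_translate_en_com  # placeholder TTS entity
    data:
      media_player_entity_id: media_player.kitchen_speaker
      message: "Vehicle alert: {{ vehicle_report.text }}"
```

The same pattern works for a wall-tablet popup or a persistent notification; only this one action block changes.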
If you try this build and refine the prompt or logic in a clever way, I’d genuinely like to hear what you came up with — these “small automation upgrades” are where Home Assistant gets really fun.