Been seeing a pattern across teams I work with: developers use AI tools to generate infrastructure scripts, deployment configs, and monitoring setups without fully understanding what they produce.
Real examples from the past month:
Terraform/IaC footguns:
- AI-generated security groups that are too permissive (0.0.0.0/0 on ports that should be internal-only)
- Missing lifecycle blocks causing resources to be destroyed and recreated on apply
- Hardcoded AMI IDs that break when regions change
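The Terraform fixes we ask for in review look roughly like this. A hedged sketch only (AWS provider assumed; resource names, ports, and variables are made up for illustration):

```hcl
# Look up the AMI at plan time instead of hardcoding a per-region ID.
data "aws_ami" "app" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Internal-only: scope ingress to the VPC CIDR, not 0.0.0.0/0.
resource "aws_security_group" "internal" {
  name   = "app-internal"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr] # not ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.app.id
  instance_type = "t3.micro"

  # Guard against accidental destroy-and-recreate on apply.
  lifecycle {
    prevent_destroy = true
    ignore_changes  = [ami]
  }
}
```

`prevent_destroy` makes `terraform apply` fail loudly instead of silently replacing the resource; `ignore_changes = [ami]` stops a newer AMI from forcing recreation.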
Shell script problems:
- Missing error handling (no `set -euo pipefail`)
- Unquoted variables that break on filenames with spaces
- `rm -rf` with variables that could be empty
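For reference, the defensive patterns we expect in place of those footguns. A minimal sketch; the function names are ours, not from any generated script:

```shell
#!/usr/bin/env bash
# Exit on error, on unset variables, and on failures inside pipelines.
set -euo pipefail

# Quote every expansion so filenames with spaces don't get word-split.
process_file() {
  local f="$1"
  wc -l < "$f"
}

# Guard rm -rf: refuse to run on an empty/unset variable or on "/".
safe_clean() {
  local dir="${1:?safe_clean: target directory not set}"
  [ -n "$dir" ] && [ "$dir" != "/" ] || return 1
  rm -rf -- "$dir"
}
```

An empty `$dir` with an unquoted, unguarded `rm -rf $dir/*` is the classic way to delete the wrong tree; the `${1:?}` expansion plus the explicit checks make that path fail fast instead.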
Monitoring blind spots:
- AI sets up basic CPU/memory alerts but misses disk, inode, and OOM killer monitoring
- Generated dashboards that look pretty but do not actually help during incidents
- Missing alerts on certificate expiry, DNS propagation, and backup verification
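Certificate expiry is the easiest of those gaps to close. A cron-able sketch using `openssl x509 -checkend` (the function name and paths are hypothetical; wire the non-zero exit into whatever alerting you already have):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Returns 0 if the cert at $1 expires within $2 days, 1 otherwise.
cert_expires_within() {
  local cert="$1" days="$2"
  # -checkend takes a window in seconds and exits non-zero if the
  # certificate will expire inside that window.
  if openssl x509 -in "$cert" -noout -checkend "$(( days * 86400 ))" >/dev/null; then
    return 1  # still valid beyond the window
  else
    return 0  # expiring soon: page someone
  fi
}
```

Run it against the PEM files your services actually serve, not just the ones in the repo; the two drift apart more often than you'd hope.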
What is working for us:
- shellcheck on every bash script in CI — catches the majority of shell footguns
- tfsec/checkov for Terraform — flags insecure resource configs before they deploy
- Pre-commit hooks that run linters automatically so nothing slips through
- Pair review for AI-generated infra — two humans review any AI-generated infrastructure change, even if a human-only change would only need one reviewer
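The CI side of that list can be a single gate script. A sketch under our assumptions (the function name is made up, and it degrades to a warning when shellcheck isn't on the runner; in real CI you'd want the tool installed and the warning branch removed):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Lint every *.sh under a directory; non-zero exit fails the build.
lint_shell_scripts() {
  local dir="$1" rc=0
  # find + -print0 handles filenames with spaces or newlines.
  while IFS= read -r -d '' script; do
    if command -v shellcheck >/dev/null; then
      shellcheck "$script" || rc=1
    else
      echo "shellcheck not installed; would lint: $script"
    fi
  done < <(find "$dir" -name '*.sh' -print0)
  return "$rc"
}
```

The same shape works for the Terraform scanners: run tfsec or checkov over the repo root in the same job, and let any finding fail the pipeline before the plan is applied.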
The tools are useful but they produce confident-looking code that hides subtle problems. Ops teams are the ones getting paged at 3am when those problems surface.
Anyone else dealing with this? What guardrails are you putting in place?

