
Artifact Registry Cleanup: Stop Paying for Thousands of Old Container Images

8 min read

Every time your CI/CD pipeline pushes a new container image to Google Artifact Registry, it creates a new version. Those versions never delete themselves. Over weeks and months, a single repository can accumulate hundreds or thousands of image versions, each one quietly adding to your storage bill. Most teams do not notice until the invoice arrives.

The Hidden Cost of Container Image Sprawl

A typical CI/CD workflow builds and pushes a new container image on every commit to the main branch. If your team merges 10 commits per day, that is 10 new images per day per service. With 5 services, you are creating 50 new images daily. After six months, that is over 9,000 image versions sitting in Artifact Registry.

Container images are not small. A Go binary in a distroless image might be 30-50 MB. A Node.js application with dependencies can easily reach 200-500 MB. A Python ML service with model weights can exceed 1 GB per image. Even with layer deduplication, unique layers accumulate across versions.

The problem compounds with multi-architecture builds. If you build for both amd64 and arm64, each push creates two image manifests plus a manifest list. Your storage consumption effectively doubles.

The most insidious aspect is that this cost grows linearly and indefinitely. Unlike compute costs that scale with traffic, storage costs scale with time. A service that receives zero traffic still accumulates storage costs from old images that will never be pulled again.
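To see how quickly this adds up, the arithmetic above can be sketched in plain shell. The per-image size is an assumed average for illustration, not a measurement:

```shell
#!/bin/sh
# Back-of-envelope accumulation: 10 commits/day across 5 services for
# roughly six months, assuming ~100 MB of unique layer data per image.
COMMITS_PER_DAY=10
SERVICES=5
DAYS=182                        # ~six months
AVG_UNIQUE_MB=100               # assumed average unique data per image

IMAGES=$((COMMITS_PER_DAY * SERVICES * DAYS))
STORAGE_GB=$((IMAGES * AVG_UNIQUE_MB / 1024))
COST_DOLLARS=$((STORAGE_GB / 10))   # $0.10/GB/month

echo "${IMAGES} images, ~${STORAGE_GB} GB, ~\$${COST_DOLLARS}/month"
```

Even with these modest assumptions, six months of unattended pushes lands in the hundreds of gigabytes.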

Artifact Registry Pricing: What You Are Actually Paying

Google Artifact Registry charges for storage based on the total size of all artifacts in your repositories. As of early 2026, the pricing is:

  • Standard repositories: $0.10 per GB per month
  • Multi-region repositories (us, eu, asia): $0.10 per GB per month
  • Egress: Standard network egress rates apply when pulling images across regions

At $0.10/GB/month, 100 GB of old container images costs $10/month. That does not sound like much until you realize you have 15 repositories across 8 projects, each with years of accumulated images. A 500 GB estate of unused images costs $50/month, or $600/year, for artifacts that serve no purpose.

The first 0.5 GB of storage per project is free, but this is negligible for any real workload. Beyond the free tier, every image version you keep costs money from the moment it is pushed until it is explicitly deleted.
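To see where you stand today, you can total repository sizes per project. This sketch assumes the sizeBytes field returned by the Artifact Registry API and a hypothetical project ID my-project; check the field name against your gcloud version:

```shell
# Sum sizeBytes across every repository in a project and estimate the
# monthly storage bill at $0.10/GB/month.
gcloud artifacts repositories list \
  --project=my-project \
  --format="value(sizeBytes)" \
| awk '{ total += $1 }
       END { printf "%.1f GB, ~$%.2f/month\n", total/2^30, total/2^30 * 0.10 }'
```

Run this across all your projects before cleanup so you have a baseline to measure savings against.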

Manual Cleanup: gcloud Commands

The most direct approach to cleaning up old images is using the gcloud CLI. Start by listing all images and their tags in a repository:

# List all images in a repository
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/my-project/my-repo \
  --include-tags \
  --sort-by="~UPDATE_TIME" \
  --format="table(package,tags,version,UPDATE_TIME)"

# List images with no tags (usually safe to delete)
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/my-project/my-repo \
  --include-tags \
  --filter="-tags:*" \
  --format="table(package,version,UPDATE_TIME)"

To delete specific image versions, use the delete command with the full image digest:

# Delete a specific image version by digest
gcloud artifacts docker images delete \
  us-central1-docker.pkg.dev/my-project/my-repo/api@sha256:abc123... \
  --quiet

# Delete all untagged images (bulk cleanup)
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/my-project/my-repo/api \
  --include-tags \
  --filter="-tags:*" \
  --format="get(version)" | while read -r digest; do
    gcloud artifacts docker images delete \
      "us-central1-docker.pkg.dev/my-project/my-repo/api@${digest}" \
      --quiet
done

# Delete images older than 30 days
CUTOFF=$(date -d '30 days ago' +%Y-%m-%d)  # GNU date; use `date -v-30d` on macOS/BSD
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/my-project/my-repo/api \
  --include-tags \
  --filter="UPDATE_TIME < '${CUTOFF}'" \
  --format="get(version)" | while read -r digest; do
    gcloud artifacts docker images delete \
      "us-central1-docker.pkg.dev/my-project/my-repo/api@${digest}" \
      --quiet
done

Manual cleanup works for one-off situations, but it does not scale. You need to remember to run it, you need to run it for every repository in every project, and you need to be careful not to delete images that are currently deployed. This is error-prone and unsustainable as your infrastructure grows.
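One way to reduce the risk of deleting a deployed image is to build an exclusion list first. This sketch assumes Cloud Run in us-central1 and the same hypothetical my-project/my-repo/api path used above; adapt the deployed-image lookup for GKE or other runtimes:

```shell
# Collect image references currently deployed to Cloud Run services.
DEPLOYED=$(gcloud run services list \
  --region=us-central1 \
  --format="value(spec.template.spec.containers[0].image)")

# Delete untagged digests that are not referenced by any deployed service.
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/my-project/my-repo/api \
  --filter="-tags:*" \
  --format="get(version)" \
| while read -r digest; do
    if ! printf '%s\n' "$DEPLOYED" | grep -qF "$digest"; then
      gcloud artifacts docker images delete \
        "us-central1-docker.pkg.dev/my-project/my-repo/api@${digest}" \
        --quiet
    fi
done
```

Note the caveat: a service deployed by tag rather than digest will carry the tag in its spec, so this check will not match it. Cross-check active revisions if you deploy by tag.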

Cleanup Policies in Terraform

Google Artifact Registry supports native cleanup policies that automatically delete old image versions based on rules you define. This is the recommended approach for teams that manage infrastructure with Terraform. Cleanup policies are defined directly on the repository resource:

resource "google_artifact_registry_repository" "docker" {
  location      = "australia-southeast2"
  repository_id = "services"
  format        = "DOCKER"
  description   = "Docker images for Cloud Run services"

  cleanup_policy_dry_run = false  # set to true first to preview deletions in Cloud Logging

  # Delete untagged images older than 7 days
  cleanup_policies {
    id     = "delete-untagged"
    action = "DELETE"
    condition {
      tag_state  = "UNTAGGED"
      older_than = "604800s"  # 7 days
    }
  }

  # Delete any image older than 90 days
  cleanup_policies {
    id     = "delete-old-versions"
    action = "DELETE"
    condition {
      tag_state  = "ANY"
      older_than = "7776000s"  # 90 days
    }
  }

  # Keep the 10 most recent tagged versions
  cleanup_policies {
    id     = "keep-recent-tagged"
    action = "KEEP"
    most_recent_versions {
      keep_count = 10
    }
  }
}

A few important details about cleanup policies:

  • KEEP rules take precedence over DELETE rules. If an image matches both a KEEP and a DELETE policy, it is kept. Always define KEEP rules to protect your currently deployed images.
  • Use cleanup_policy_dry_run = true first. This logs what would be deleted without actually deleting anything. Review the audit logs in Cloud Logging before setting it to false.
  • Durations are specified in seconds. The older_than field takes a duration string with an s suffix: 7 days is 604800s, 30 days is 2592000s, 90 days is 7776000s.
  • Cleanup runs asynchronously. After applying the policy, Google runs the cleanup in the background. Large repositories may take hours to fully clean up.
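If you are not on Terraform, the same policies can be applied with gcloud via a JSON policy file. This sketch mirrors the Terraform example above; the exact JSON schema (tagState, olderThan, keepCount casing) should be verified against the gcloud documentation for your version:

```shell
cat > policy.json <<'EOF'
[
  {
    "name": "delete-untagged",
    "action": {"type": "Delete"},
    "condition": {"tagState": "untagged", "olderThan": "604800s"}
  },
  {
    "name": "keep-recent-tagged",
    "action": {"type": "Keep"},
    "mostRecentVersions": {"keepCount": 10}
  }
]
EOF

# Preview first; re-run with --no-dry-run to enforce the policy.
gcloud artifacts repositories set-cleanup-policies services \
  --project=my-project \
  --location=australia-southeast2 \
  --policy=policy.json \
  --dry-run
```

This is useful for repositories that exist outside your Terraform state but still need retention rules.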

Automated Cleanup with Cloud Guardian

While Terraform cleanup policies are effective for repositories you manage directly, many teams have repositories created ad hoc, by other teams, or in projects where Terraform is not yet adopted. Cloud Guardian scans all Artifact Registry repositories across your connected GCP projects and detects repositories that are accumulating excessive image versions.

Cloud Guardian Detection

Cloud Guardian's Artifact Registry scanner runs on every scan cycle and flags repositories that exceed configurable thresholds. When a repository has more than 100 image versions, Cloud Guardian:

  1. Identifies the repository and counts the total number of image versions across all packages
  2. Estimates the storage cost based on image sizes and current pricing
  3. Creates a remediation action to add cleanup policies or delete old versions directly
  4. If auto-remediation is enabled with the artifact_registry:cleanup scope, it deletes untagged and old versions automatically (capped at 50 versions per cycle to avoid API rate limits)
  5. For teams using Terraform, generates a PR adding cleanup policies to the repository resource

The key advantage of automated scanning is coverage. You do not need to remember which projects have Artifact Registry repositories, or which repositories lack cleanup policies. Cloud Guardian discovers repositories automatically through the GCP APIs and flags any that are accumulating storage costs unnecessarily.

For repositories where direct deletion is risky, Cloud Guardian can operate in PR mode. When a project has a linked GitHub repository and a GitHub App installation, remediation actions generate pull requests that add Terraform cleanup policies. The PR includes the exact Terraform configuration shown in the previous section, customized for the specific repository.

Best Practices for Image Retention

A good retention strategy balances cost savings against the need to roll back to previous versions. Here are the practices we recommend after analyzing hundreds of GCP projects:

  • Keep the last 5-10 tagged versions. This gives you enough rollback headroom for most incidents. If you need to roll back further than 10 versions, you likely have a deeper problem that a container image swap will not fix.
  • Delete untagged images after 7 days. Untagged images are typically intermediate build layers or images that were superseded by a newer push to the same tag. They are almost never needed after a few days.
  • Set a hard ceiling of 90 days for all images. Any image older than 90 days is extremely unlikely to be deployed again. If regulatory requirements mandate longer retention, use a separate archival repository with cheaper storage.
  • Tag your production images explicitly. Use tags like v1.2.3 or prod-2026-03-13 so that cleanup policies can distinguish between images that matter and images that do not. Avoid relying solely on latest as it provides no rollback capability.
  • Use immutable tags for deployed images. Artifact Registry supports immutable tags that prevent overwriting. Enable this on production repositories to ensure that a tag always points to the same digest.
  • Audit storage monthly. Add a monthly review of Artifact Registry storage costs to your FinOps cadence. Look for repositories with disproportionate storage relative to their importance.
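The monthly audit in the last point can be a short loop over your projects. The project IDs here are illustrative; in practice you might pull the list from `gcloud projects list`:

```shell
# Monthly FinOps check: list every Artifact Registry repository per
# project with its size, largest first.
for project in prod-project staging-project; do
  echo "== ${project} =="
  gcloud artifacts repositories list \
    --project="${project}" \
    --sort-by="~sizeBytes" \
    --format="table(name,format,sizeBytes)"
done
```

Repositories that sit at the top of this table month after month without corresponding deployments are the first candidates for cleanup policies.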

A common anti-pattern is setting overly conservative retention policies out of fear of deleting something important. In practice, container images older than 30 days are almost never used. The cost of keeping them indefinitely far outweighs the extremely low probability that you will need a specific old version.

Automate Your Artifact Registry Cleanup

Cloud Guardian scans every Artifact Registry repository across your GCP projects. It detects repositories with hundreds of unused image versions, estimates your wasted storage costs, and cleans them up automatically or via Terraform pull requests. Connect your project and stop paying for images you will never use again.

Get Started Free