VPC Lattice: Cross-Account Connectivity Without the VPC Peering Mess
AWS Services You’re Not Using — Issue #1
You have two AWS accounts. One runs your data science workloads. The other runs your application cluster. A service in account A needs to call a service in account B.
The classic playbook: VPC peering, Transit Gateway, PrivateLink. All of them work. All of them come with routing tables, CIDR management, and the kind of Terraform that makes you question your career choices.
VPC Lattice does it differently. And after setting it up in production, I’m convinced most multi-account AWS shops should be using it.
The Problem
Here’s the setup. Two accounts, two EKS clusters, one dependency:
- Account A (Data Science) runs ML workloads on an EKS cluster managed by a third-party platform
- Account B (Development) runs application services, including a service we’ll call lagosta, listening on port 3434 behind a shared internal ALB
The data science team needed to call lagosta from their pods. Simple, right?
With VPC peering, you’d need overlapping CIDR checks, route table updates in both VPCs, security group rules referencing cross-account CIDR blocks, and DNS resolution across accounts. For one service call.
Enter VPC Lattice
VPC Lattice is a managed application networking service that operates at Layer 7. Think of it as a service mesh that works across VPCs and accounts without touching your network layer.
The key concepts:
- Service Network — a logical grouping that VPCs join. Think of it as a shared namespace.
- Service — an endpoint within the network, with listeners and target groups (similar to ALB concepts)
- VPC Association — connects a VPC to a service network as either a producer or consumer
- RAM Share — uses AWS Resource Access Manager to share the service network across accounts
[Architecture diagram: in Account A (Consumer), a workload on the Data Science EKS cluster calls https://<lattice-dns>/... over VPC Lattice (HTTPS:443). The request lands in Account B (Producer), where the shared internal ALB in front of the Development EKS cluster routes it to the lagosta pods on port 3434.]
No peering. No Transit Gateway. No CIDR overlaps to worry about. The consumer just calls a Lattice-generated DNS name over HTTPS.
The Terraform
Producer Side (Account B)
The producer account creates the service network, the service itself, and shares it via RAM:
resource "aws_vpclattice_service_network" "main" {
  name      = "lagosta"
  auth_type = "NONE" # or AWS_IAM for auth policies
}

resource "aws_vpclattice_service" "lagosta" {
  name      = "lagosta"
  auth_type = "NONE"
}
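If you flip auth_type to AWS_IAM, you also attach an IAM auth policy to the service (or the whole network). A minimal sketch using the provider's aws_vpclattice_auth_policy resource; the consumer account ID here is hypothetical:

```hcl
# Hedged sketch: allow a specific consumer account to invoke the service.
# The principal account ID is a placeholder.
resource "aws_vpclattice_auth_policy" "lagosta" {
  resource_identifier = aws_vpclattice_service.lagosta.arn

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::111122223333:root" } # hypothetical consumer account
      Action    = "vpc-lattice-svcs:Invoke"
      Resource  = "*"
    }]
  })
}
```

With auth_type = "NONE", as in this setup, access control falls entirely on the security groups attached to the VPC associations.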
resource "aws_vpclattice_target_group" "alb" {
  name = "lagosta-alb"
  type = "ALB"

  config {
    port           = 443
    protocol       = "HTTPS"
    vpc_identifier = data.aws_vpc.development.id
  }
}

resource "aws_vpclattice_target_group_attachment" "alb" {
  target_group_identifier = aws_vpclattice_target_group.alb.id

  target {
    id   = data.aws_lb.shared_alb.arn
    port = 443
  }
}
resource "aws_vpclattice_listener" "https" {
  name               = "https"
  service_identifier = aws_vpclattice_service.lagosta.id
  protocol           = "HTTPS"
  port               = 443

  default_action {
    forward {
      target_groups {
        target_group_identifier = aws_vpclattice_target_group.alb.id
        weight                  = 100
      }
    }
  }
}

resource "aws_vpclattice_service_network_service_association" "main" {
  service_identifier         = aws_vpclattice_service.lagosta.id
  service_network_identifier = aws_vpclattice_service_network.main.id
}
The producer also needs to associate its VPC and allow Lattice traffic to reach the ALB:
resource "aws_vpclattice_service_network_vpc_association" "producer" {
  service_network_identifier = aws_vpclattice_service_network.main.id
  vpc_identifier             = data.aws_vpc.development.id
  security_group_ids         = [aws_security_group.lattice_producer.id]
}
# Allow VPC Lattice prefix list to reach the ALB
resource "aws_vpc_security_group_ingress_rule" "alb_from_lattice" {
  security_group_id = data.aws_security_group.shared_alb.id
  prefix_list_id    = data.aws_ec2_managed_prefix_list.vpc_lattice.id
  ip_protocol       = "tcp"
  from_port         = 443
  to_port           = 443
}
Pro tip: AWS publishes a managed prefix list for VPC Lattice in each region. Use it instead of hardcoding CIDR ranges.
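The ingress rule above references that prefix list through a data source. A sketch of the lookup, assuming the article's eu-central-1 region; the name follows the com.amazonaws.<region>.vpc-lattice pattern:

```hcl
# AWS-managed prefix list covering VPC Lattice's link-local source range.
# Name pattern: com.amazonaws.<region>.vpc-lattice (region assumed here).
data "aws_ec2_managed_prefix_list" "vpc_lattice" {
  name = "com.amazonaws.eu-central-1.vpc-lattice"
}
```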
Sharing via RAM
resource "aws_ram_resource_share" "lattice" {
  name                      = "lagosta-lattice"
  allow_external_principals = true
}

resource "aws_ram_resource_association" "lattice" {
  resource_arn       = aws_vpclattice_service_network.main.arn
  resource_share_arn = aws_ram_resource_share.lattice.arn
}

resource "aws_ram_principal_association" "ds_dev" {
  principal          = "111122223333" # consumer account ID
  resource_share_arn = aws_ram_resource_share.lattice.arn
}
Consumer Side (Account A)
The consumer accepts the RAM share and associates its VPC:
resource "aws_ram_resource_share_accepter" "lattice" {
  share_arn = "arn:aws:ram:eu-central-1:444455556666:resource-share/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

resource "aws_vpclattice_service_network_vpc_association" "consumer" {
  service_network_identifier = data.aws_vpclattice_service_network.main.id
  vpc_identifier             = data.aws_vpc.workload.id
  security_group_ids         = [aws_security_group.lattice_consumer.id]
}
That’s it on the consumer side. Once the VPC association is active, any workload in that VPC can reach the Lattice service via its generated DNS name.
The Host Header Gotcha
This is the part that’ll cost you an afternoon if you don’t know about it.
Our ALB is shared across multiple Kubernetes services. It uses host-based routing — each Ingress resource matches a specific hostname. The lagosta Ingress only responds to lagosta.development.internal.example.com.
But VPC Lattice gives you a different DNS name: lagosta-0dfb0067f5c2075e3.7d67968.vpc-lattice-svcs.eu-central-1.on.aws
When Lattice forwards the request to the ALB, it passes along whatever Host header the client sent. If you curl the Lattice DNS directly, the ALB sees that Lattice DNS name as the Host header, matches no Ingress rule, and returns a 404.
The fix: consumers must explicitly set the Host header:
curl -H "Host: lagosta.development.internal.example.com" \
https://lagosta-0dfb0067f5c2075e3.7d67968.vpc-lattice-svcs.eu-central-1.on.aws/healthcheck
In application code, this means configuring your HTTP client to override the Host header. It’s one line of code, but it’s non-obvious.
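As an illustration, here is a minimal Python sketch using only the standard library. The DNS names come from the article; the call_lagosta helper itself is hypothetical:

```python
import urllib.request

# Names from the article; the helper below is a hypothetical example.
LATTICE_DNS = "lagosta-0dfb0067f5c2075e3.7d67968.vpc-lattice-svcs.eu-central-1.on.aws"
ALB_HOST = "lagosta.development.internal.example.com"

def call_lagosta(path, base_url=f"https://{LATTICE_DNS}"):
    """Call lagosta via its Lattice DNS name, overriding the Host header
    so the shared ALB's host-based routing matches the lagosta Ingress."""
    # urllib only derives Host from the URL when the caller did not set
    # one explicitly, so this header wins over the Lattice DNS name.
    req = urllib.request.Request(base_url + path, headers={"Host": ALB_HOST})
    return urllib.request.urlopen(req)
```

Most HTTP clients (requests, Go's net/http, Java's HttpClient) accept an explicit Host header the same way; the only trap is forgetting to set it.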
Why ALB Targets, Not IP Targets
We evaluated three approaches:
| Approach | Pros | Cons | Verdict |
|---|---|---|---|
| ALB target | Stable endpoint, no IP management | Host header required, no health checks | ✅ Winner |
| Pod IP target | Direct to pod | IPs change on every restart | ❌ Rejected |
| NodePort + Node IPs | Health checks work | IPs change on node replacement | ❌ Rejected |
For ALB-type target groups, VPC Lattice does not support health checks. The target status will permanently show UNAVAILABLE with reason HealthCheckNotSupported. This is expected and documented — traffic flows regardless.
It looks alarming in the console. It’s fine.
Lessons That Hurt
The Terraform Destroy Deadlock
If you have a Lattice listener forwarding to a target group and you try to terraform destroy the target group, Terraform hangs for 20+ minutes. The API won’t delete a target group that’s still referenced by a listener’s default action.
The workaround: manually switch the listener to a fixed response before destroying:
aws vpc-lattice update-listener \
  --service-identifier svc-xxxxxxxxxxxxxxx \
  --listener-identifier listener-xxxxxxxxxxxxxxx \
  --default-action '{"fixedResponse":{"statusCode":404}}'
Then the target group deletes cleanly. This should be handled in Terraform’s resource dependency graph, but as of early 2026, it isn’t.
Consumer VPC Must Be the Workload VPC
This sounds obvious in hindsight: the VPC association must be on the VPC where your pods actually run. In our case, the data science account had two VPCs — a default one and one managed by a third-party platform. The workloads ran on the platform VPC, not the default. Associating the wrong VPC means the DNS resolves but connections time out silently.
When to Use VPC Lattice
Good fit:
- Cross-account service-to-service calls (the sweet spot)
- Multi-VPC architectures where peering/Transit Gateway is overkill
- Teams that want service-level access control without network-level complexity
- Gradual migration from monolith to services across accounts
Not ideal:
- High-throughput, latency-sensitive data plane traffic (adds a hop)
- Simple same-VPC service discovery (use Kubernetes DNS or Cloud Map)
- If you need full L4 control (Lattice is L7 only — HTTP/HTTPS/gRPC)
The Bottom Line
VPC Lattice replaces a stack of networking primitives (peering, TGW, PrivateLink, Route 53 Resolver rules) with a single abstraction that works at the application layer. It took us about two days to set up, including wrong turns, compared to the week we’d budget for a Transit Gateway approach with proper CIDR planning.
The service is GA, the Terraform provider supports it fully, and the pricing is straightforward (per-hour for service + per-GB data processed).
If you’re running multiple AWS accounts and services need to talk across account boundaries, VPC Lattice should be your first option, not your last.
This is issue #1 of “AWS Services You’re Not Using” — a series about overlooked AWS services I actually run in production. Next up: AWS Chatbot + SNS — how to build a scalable Slack notification system for multiple teams with Terraform, and why “one Chatbot config per channel” will ruin your day.