aws · reinvent · events · cloud

AWS re:Invent 2024: Two Real Things and a Lot of Noise

I sat through more re:Invent keynotes than I care to admit. Most of it was repackaging. Two announcements actually matter for the work I do.

20 January 2025 · 4 min read

re:Invent 2024 was a lot of keynote and not much breakthrough. AWS shipped roughly 200 announcements across the week. About 180 of them are repackaging, integration polish, or "it has AI now" rebranding. Two are genuinely worth your time.

This is what survives the noise.

The two real things

1. S3 Tables

S3 Tables is S3 with native Iceberg support, plus background optimisation that handles compaction and snapshot management for you. It is a proper data lakehouse primitive at S3 prices, with the operational characteristics of a managed service.

This is a big deal. Iceberg has been the de facto open table format for two years. Until now, running it in production meant either paying a vendor (Tabular, Snowflake, Databricks) or running your own compaction jobs and praying. S3 Tables handles the maintenance burden directly.

What I expect this to do over the next year:

  • Kill or absorb a chunk of the open table format vendor market. Tabular got bought by Databricks earlier in 2024 anyway. The remaining independent vendors have a thinner story.
  • Make Iceberg the default for new analytical workloads on AWS. Athena, EMR, and Glue all integrate.
  • Start a slow migration of existing Parquet-on-S3 data lakes to table-format-on-S3. The migration is non-trivial but the operational win is real.

If you run analytics on AWS, look at S3 Tables this quarter. If you do not, ignore it.
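As a rough sketch of what getting started looks like, using the `aws s3tables` CLI as shipped at launch (all names, the account ID, and the region below are placeholders; you need credentials and a region where S3 Tables is available):

```shell
# Create a table bucket, a namespace, and an Iceberg table in it.
# Every name and ARN here is hypothetical.
aws s3tables create-table-bucket --name analytics-lake

aws s3tables create-namespace \
  --table-bucket-arn arn:aws:s3tables:eu-west-1:111122223333:bucket/analytics-lake \
  --namespace events

aws s3tables create-table \
  --table-bucket-arn arn:aws:s3tables:eu-west-1:111122223333:bucket/analytics-lake \
  --namespace events \
  --name clicks \
  --format ICEBERG
```

From there the table is queryable from Athena, EMR, or Glue via the catalog integration, and compaction and snapshot expiry run in the background without a job you own.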

2. Trainium 2 in serious production form

Trainium 2 has been talked about for over a year. At re:Invent it actually shipped in usable instance types with credible price-performance for training and inference. The bigger story is the Trn2 UltraServer setup with 64 chips wired together for very large model training.

The headline is not "AWS competes with NVIDIA". They do not, head to head. The headline is that for inference workloads on existing models, Trainium 2 is now cheap enough and capable enough that the calculus changes. For specific frontier-adjacent workloads it is competitive.

The other announcement nobody is reading closely: AWS is partnering with Anthropic on a giant Trainium-based training cluster. That is the strategic bet. AWS does not need to win the chip wars. They need a credible non-NVIDIA option for hyperscale customers and a marquee model partner that uses it. Both are now real.

For most engineering teams this changes nothing today. By 2026 it might change everything about your inference cost structure.
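The "calculus changes" claim reduces to simple break-even arithmetic. A minimal sketch, with all prices as hypothetical placeholders (plug in real on-demand rates for your region; these numbers are not AWS pricing):

```python
# Break-even maths for moving inference from a GPU fleet to Trainium 2.
# All rates below are HYPOTHETICAL placeholders, not real AWS pricing.

def monthly_cost(hourly_rate: float, instances: int, hours: float = 730.0) -> float:
    """On-demand monthly spend for a fleet of identical instances."""
    return hourly_rate * instances * hours

def breakeven_throughput_ratio(gpu_rate: float, trn_rate: float) -> float:
    """Minimum Trainium-vs-GPU throughput ratio at which switching is
    cost-neutral; below this ratio the GPU fleet is still cheaper."""
    return trn_rate / gpu_rate

# Placeholder hourly rates for one GPU instance and one Trainium instance:
gpu_rate = 32.77
trn_rate = 21.50

ratio = breakeven_throughput_ratio(gpu_rate, trn_rate)
print(f"Trainium needs >= {ratio:.0%} of GPU throughput to break even")
```

The useful output is the ratio: if a Trainium instance costs 65% of the GPU instance, it only has to deliver 65% of the throughput on your workload to win on cost, which is why supply and pricing into Q1 are worth watching.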

The "AI" rebranding tier

A lot of the announcements are existing services with a Bedrock integration bolted on. RDS, Glue, OpenSearch, you name it, they all have an AI button now.

Some of this is genuinely useful. Bedrock Agents is improving slowly. Bedrock Guardrails is meaningfully better than building safety filters yourself. The Q tooling for Connect (call centre) is real productivity for that specific vertical.

Most of it is noise. If you do not have an active use case for "I want this database to suggest queries to me", you are not going to develop one because AWS shipped a button.

What surprised me

A few smaller announcements that did not get the keynote time but matter:

  • EKS Auto Mode. AWS finally shipped a "you don't have to manage the data plane either" mode for EKS. This is GKE Autopilot for EKS, and overdue. For teams that wanted Kubernetes without the operational tax, this is now a real option.
  • Aurora DSQL. A distributed PostgreSQL-compatible service with active-active multi-region writes. The technical claims are bold. I am withholding judgement until people run real workloads on it. If they hold up, this changes the math on global multi-region apps.
  • CloudWatch Logs improvements. Long-overdue ergonomics. Search across log groups. Better integration with traces. Honest small wins.
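The cross-log-group search surfaces through Logs Insights, which already accepted multiple log groups per query; the new ergonomics build on the same query language. A representative query (log group names you pass alongside it, e.g. via `aws logs start-query --log-group-names`, are placeholders):

```
fields @timestamp, @logStream, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
```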

What was conspicuously absent

No major networking simplification. The AWS networking surface remains a baroque cathedral of VPCs, subnets, route tables, transit gateways, PrivateLink endpoints, RAM shares, and one Direct Connect for spice. Every year I hope re:Invent will simplify this and every year it does not.

No serious answer to Cloudflare in the edge layer. Lambda@Edge and CloudFront Functions are fine. They are not what Cloudflare Workers are. AWS has decided to lose the edge developer story to Cloudflare and that is increasingly visible.

No Graviton 5 announcement, which suggests the cadence has slowed slightly. Graviton 4 is still rolling out across instance families. That is fine but worth noting.

What I am telling clients

Three concrete actions off the back of re:Invent 2024:

  1. Look at S3 Tables for any new analytical workload. Skip the migration discussion for existing data lakes unless the operational pain is real.
  2. Watch Trainium 2 pricing into Q1 2025. If you run a lot of inference and your current spend is dominated by GPU on-demand, there will be a real arbitrage opportunity once supply ramps.
  3. Re-evaluate EKS Auto Mode for any cluster that is currently a small platform team's full-time job. The savings in operational hours are likely larger than the price premium.
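For point 3, evaluating Auto Mode via eksctl is a small config change. A sketch, assuming a recent eksctl release with `autoModeConfig` support (cluster name and region are placeholders, and the field names may shift between releases):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster   # placeholder
  region: eu-west-1    # placeholder
autoModeConfig:
  enabled: true        # AWS manages nodes, storage and networking add-ons
```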

The rest you can read on the AWS news blog when you have time. Most of it will not change anything for you.