At cloud prices, TPUs are cheaper per FLOP but have much worse library support, leading to much higher upfront engineering costs - and you're locked into Google's cloud.
On the other hand, essentially every ML project works out of the box with Nvidia GPUs. There's still vendor lock-in to Nvidia, but it's more palatable.
If you spend $100k of an ML engineer's time getting FooNet to work on TPUs, and then the cutting edge advances or you pivot and suddenly need BarNet support, you might wish you'd spent that $100k on a stack of Nvidia GPUs instead.
But also, cost per FLOP has been coming down aggressively for Nvidia and should continue to, whereas I doubt Google will do the same for TPUs (the lock-in removes the pressure).
Also, as usual the hyperscalers are far more expensive than smaller providers - here's an incomplete list: https://getdeploying.com/reference/cloud-gpu/nvidia-h100 - GCP seems to be around $100/hour for the 8xH100 config (similar to AWS).
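To make the $100k-of-engineering-time tradeoff concrete, here's a quick back-of-envelope sketch. The $100/hr GCP figure is from the list above; the $25/hr "cheap provider" rate is purely an assumption for illustration, not a quote from any vendor:

```python
# Back-of-envelope break-even: how many 8xH100 node-hours does $100k
# of TPU-porting engineering time buy instead?
# The $25/hr cheap-provider rate below is an assumption, not a real quote.

ENGINEERING_COST = 100_000  # the $100k of ML engineer time from above

providers = [
    ("GCP (8xH100)", 100.0),             # roughly per the list above
    ("cheap provider (8xH100)", 25.0),   # assumed for illustration
]

for name, hourly_rate in providers:
    node_hours = ENGINEERING_COST / hourly_rate
    node_days = node_hours / 24
    print(f"{name}: ${hourly_rate:.0f}/hr -> "
          f"{node_hours:,.0f} node-hours (~{node_days:,.0f} days of compute)")
```

Even at hyperscaler rates that's over a thousand node-hours of training you forgo, and several times that at cheaper providers - and unlike the porting work, the GPU-hours stay useful if you pivot models.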
I just suspect that overall, Nvidia GPU prices will come down faster (across the entire market) than more proprietary hardware. I could be wrong, but I don't think Google will want to compete with the general market on a FLOPS/$ metric (they're already far more expensive than the cheaper providers), so they'll end up milking the locked-in users.