The only given is that, if you marry in real life and have real children, it's possible you'll end up paying child support and alimony to someone who hates you.
Choosing an AI wife and kids over the risk many real people face -- paying child support and alimony to someone who hates them -- isn't something I see as an irrational decision (though not an inevitable one either).
In fact, if you don't consider at least the possibility, you are a fool.
But you're not actually having this wife and these children if they're AI -- they're an illusion, a simulation that only superficially resembles the real thing. It's a poor substitute, and you of all people should know that.
> I don't see as an irrational decision
Here's where it's irrational: the mere possibility is not enough to abandon the prospect, because someone who wanted to be consistent would have to avoid all comparable risks, and ultimately hardly do anything at all.
Risks are an inherent part of life and those who avoid them at all costs are universally and consistently miserable.
It comes down to personal risk appetite and risk-benefit analysis. My claim, in any case, is that it's better to have an AI spouse and child than to be relegated to a mere bank account for an ex-spouse who hates you. Maybe it's still worth it so you can produce a child you don't see with someone who hates you, so I'll concede that might be a point of contention. In any case, I make no claim that those are the only possibilities; I merely compare the two.
I haven't presupposed I can make the decision for any particular person.
> It comes down to personal risk appetite and risk-benefit analysis.
What analysis, when you don't even have a good estimate of the probabilities involved? Min-maxing, on the other hand, is a recipe for extinction as a species, so it's not a viable strategy.
> My claim, in any case, is that it's better to have an AI spouse and child than to be relegated to a mere bank account for an ex-spouse who hates you.
>What analysis when you don't even have a good estimate of the probabilities involved?
Without some analysis, the decision would be irrational. By discounting a rational analysis here, while also discounting (in your prior comment) irrationality, you've set a fallacious (and ultimately circular) trap in which being rational is irrational and being irrational is rational -- a clever gotcha where any counterargument loses. All the while, of course, appealing to the risk/benefit factor of the probability of the extinction of the species.
>Min-maxing on the other hand is a recipe for extinction as a species, so not a viable strategy
This doesn't come into play for the contextual claim: there are more than enough people reproducing in jurisdictions with no effective alimony or child support to continue the species, even if everyone elsewhere somehow came to such a decision -- which in any case doesn't seem to be a given.
>How specifically is it better?
I suppose I could play the Socratic method here as well and ask why it's better that the species doesn't go extinct. There is no way to objectively prove that's true, so I'll yield that we're both making assertions that are subjectively better, rather than claims backed by some universal law of the universe -- that it's better to be under constant threat of imprisonment by the state, if you can't come up with an extra 20%+ of (imputed, not even actual, so theoretically above 100%) income, and be hated, than to enjoy some entertainment with AI.
1. Not a given.
2. Something one can work on, so that they're either more likeable or, at the very least, less defeatist about the whole thing.