Regrettably we still have some snapshot tests in our code base, yes. I cringe every time one goes red and I'm supposed to check it. Like you say, I eyeball them, and after the fourth one with the same pattern I give up and just regenerate them. Which means they might as well not be there, because they won't catch any actual bugs for me.
We try to replace them every time we come across one that needs adjusting, actually. Quick is bad here. And yes, they're flaky as hell if you use them for everything. Even a tiny change that just introduces a new element that's supposed to be there can change unrelated parts of snapshots, because of generated names in many places.
Asserting on the important parts of some JSON output is not generally more expensive at all. You let the code run to generate the equivalent of a snapshot, then paste it into the assertion(s) and adjust as necessary. Yes, it takes more time than a snapshot. But optimizing for time at that end is the wrong optimization: you're saving one dev's time while increasing the time expenditure of the reviewers, the later devs, and their reviewers (if they want to do a proper job instead of eyeballing it and YOLOing).
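To make the workflow concrete, here's a minimal sketch of that approach: run the code once to see the output, then assert only on the fields that matter and ignore the generated noise. `render_user_card` and the JSON shape are made up for illustration, not from any real code base.

```python
import json

def render_user_card(user_id: int) -> str:
    # Stand-in for the code under test: returns JSON mixing stable,
    # meaningful fields with generated noise (ids, class names).
    return json.dumps({
        "name": "Ada",
        "role": "admin",
        # Generated class name: changes whenever CSS tooling changes.
        "css_class": f"card-generated-{user_id * 31}",
        "node_id": f"n{user_id}",
    })

def test_user_card():
    result = json.loads(render_user_card(7))
    # Assert only the fields that carry meaning, so unrelated changes
    # (e.g. regenerated class names) can't make this test flake.
    assert result["name"] == "Ada"
    assert result["role"] == "admin"
    # If a generated field matters structurally, assert its shape,
    # not its exact value.
    assert result["css_class"].startswith("card-")
```

A full snapshot of this JSON would churn every time the generated class name changed; the targeted assertions only fail when a field a reader actually cares about changes.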
As I see it, devs using snapshots are the opposite of a 10x dev. It's being a 0.1x dev. Thanks but no thanks.
>We try to replace them every time we come across one that needs adjusting, actually. Quick is bad here. And yes, they're flaky as hell if you use them for everything. Even a tiny change that just introduces a new element that's supposed to be there can change unrelated parts of snapshots, because of generated names in many places.
If you can't keep the flakiness under control then yeah, they'll be worse than useless, because they'll fail for no discernible reason at all.
Oh, the reasons are discernible. I call it flaky when an unrelated change makes the snapshots change. You go check why, and all you can do is facepalm. What you and I call "unrelated" may differ, such as when I make a CSS change that merely affects some generated class names and a bunch of snapshots fail. This will be worse in code bases with lots of reusable CSS, of course: your blast radius for flakiness grows with the amount of CSS reuse and the number of snapshot tests you have. Ours is very controllable, but only because we're doing the right things (such as reducing snapshot use).
That's when you start taking cursory looks at the first few changes and then just regenerate them, which means they'll never find any actual bugs, because you're ignoring them.