The LLM is the coding tool, not the arbiter of outcome.
A human’s abilities to assess, interrogate, compare, research, and develop intuition are all skills entirely independent of the coding tool. Those skills are developed through project work: delivering meaningful stuff to someone who cares enough to use it and give feedback (e.g. customers), making things go whoosh in production, etc.
This is an XY problem, and the real Y is galaxy brains submitting unvalidated, shoddy work that makes good outcomes harder, not easier, to reach.