Whoa! Really? Seems like modern software teams would be thrilled to achieve 98%, given the flood of bug fixes and patches shipped immediately after the massive beta test known as release day.
Code has to be 100% correct; any deviation from the intended behavior is a bug (assuming the code is syntactically valid in the first place).
Code that is 98% correct is actually much worse than no code at all. That's the kind of code that introduces subtle, systemic faults, and years later results in catastrophic failure: the company realizes it has been calculating millions of payments without sales tax, or a clever attacker discovers they can gain escalated permissions by sending a well-crafted request.
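To make the sales-tax scenario concrete, here's a hypothetical sketch of how such a fault stays invisible: a silent fallback swallows the error instead of failing loudly. All names (`TAX_RATES`, `total_with_tax`, the rates themselves) are made up for illustration, not taken from any real system.

```python
# Hypothetical tax table, keyed by state code (rates are illustrative).
TAX_RATES = {"CA": 0.0725, "NY": 0.04}

def total_with_tax(subtotal_cents: int, state: str) -> int:
    """Return the total in cents after applying the state's sales tax."""
    # The subtle fault: an unknown key silently defaults to a 0% rate,
    # so a typo like "ca" instead of "CA" charges no tax at all.
    rate = TAX_RATES.get(state, 0)
    return round(subtotal_cents * (1 + rate))

total_with_tax(10_000, "CA")  # tax applied, as intended
total_with_tax(10_000, "ca")  # no tax, no error, no log — for years
```

A safer design would be `TAX_RATES[state]`, so an unrecognized state raises `KeyError` at the first bad request instead of quietly under-billing millions of payments.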
What's your metric for the "percentage the code is wrong"? Is it how many lines of code were wrong, or how many test cases the code fails?
Presumably, if AI-generated code passes every test case but would fail on edge cases that the human programmers did not anticipate in their test suite, then those humans might well have made similar mistakes had they written the code themselves.
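A minimal sketch of that situation — code that passes 100% of the tests its author thought to write, yet still harbors a bug neither a human nor an AI author would catch from the suite alone. The function and test values here are invented for illustration:

```python
def average(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# The suite the author happened to write — every case passes:
assert average([1, 2, 3]) == 2
assert average([10]) == 10
assert average([2.0, 4.0]) == 3.0

# The unanticipated edge case — crashes with ZeroDivisionError:
# average([])
```

By the test suite's metric this code is 100% correct; by the "fails on some inputs" metric it is not, which is exactly why the two percentages in this thread are measuring different things.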