This last month I decided to try the JetBrains equivalent of Cursor for their IDEs (https://www.jetbrains.com/ai/). It's a plugin, well integrated into the code editor, that you can easily summon.
I work in Rust and I had to start working with several new libraries this month. One of them is `proptest-rs`, a Rust property-testing library that defines a whole new grammar for writing tests. I am 100% sure that I spent much less time getting on-boarded with the library's best practices and usage. I just quickly went through their book (to learn the vocabulary) and asked the AI to generate the code itself. I was very surprised that it did not make any mistakes, considering the library's rather unusual custom grammar. I will at least keep trying it for another month.
How do you know that it didn’t make any mistakes that you wouldn’t have made if you had learned the library without AI? Even before AI-generated code, people made mistakes they didn’t know about, because they never read the documentation, for example, and it “worked”… except for the unintended side effects, of course. Adding an AI layer into the picture definitely makes this worse.
It's not that you ask it to write 200 lines of code at once and blindly trust it. It's more that you start using the lib and ask it to generate one helper method at a time, for an isolated task, which leaves you time to properly review the code it wrote. Even when a human writes code, it needs to go through peer review, so the exact same applies with AI. It's the job of the reviewer (in this case, the one who invokes the AI) to make sure that the one who wrote the code does not make mistakes, which can include reading the docs in more detail.
You need to know the methods being used to review either way (when you really review code, which most people don't do). You need to read the doc either way. At that point you could just write the code yourself. And as many examples have shown (e.g. https://news.ycombinator.com/item?id=41307387), it’s not even that quick.