I wouldn’t be surprised if it’s technically true, but it’s more like: the coder starts writing out a line of code, the AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code by character count that the AI autocompleted is 25% of all new code. Like the same shit as GitHub Copilot that came out years ago, nothing special at all
All of that 25% of “AI generated” code was still very heavily initiated and carefully crafted by humans every single time to make sure it actually works
It’s such a purposeful misrepresentation of labour (even though the coders themselves all want to automate away and exploit the rest of the working class too)
coder starts writing out a line of code, AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code by character count that the AI autocompleted is 25% of all new code.
When you dig past the clickbait articles and find out what he actually said, you’re correct. He’s jerking himself off about how good his company’s internal autocomplete is.
I’m not going to read it but I bet it’s nowhere near as good as he thinks it really is
I wouldn’t be surprised if the statistics on “AI generated code” were something like: I type 10 characters, I let the AI autocomplete the next 40 characters, but then I have to edit 20 of those characters, and the AI tool counts all 40 characters as “AI generated” since that was what was accepted
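Something like this, if you sketch out how that kind of acceptance-based counter probably works (made-up numbers and names, just to show how the accounting could inflate the stat, not anyone’s actual methodology):

    # Hypothetical sketch of acceptance-based "AI generated" accounting,
    # using the 10/40/20 example above.
    typed_by_human = 10        # characters the coder typed themselves
    accepted_from_ai = 40      # characters of the suggestion that got accepted
    edited_after_accept = 20   # accepted characters the coder then rewrote

    total_chars = typed_by_human + accepted_from_ai

    # What a naive counter reports: everything accepted is "AI generated",
    # whether or not it survives the coder's edits.
    reported_share = accepted_from_ai / total_chars

    # What actually came out of the model and stayed untouched.
    actual_share = (accepted_from_ai - edited_after_accept) / total_chars

    print(f"reported: {reported_share:.0%}")  # 80%
    print(f"actual:   {actual_share:.0%}")    # 40%

In that toy example the headline number is double what the model actually contributed, and that’s before anyone asks whether the accepted code was any good.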
Not to mention, since it’s probably all trained on their own internal codebase and there’s a set coding style guide, it’d probably perform way worse for general coding, where people aren’t all trying to write code following the exact same patterns, guidelines, and libraries.
I assume that’s what it is as well. I’m guessing there’s also a lot of boilerplate stuff, and the line counts are inflated by pointless comments and function comment templates that usually have to get fully rewritten.
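Picture the kind of thing that gets counted, something like this (made-up example of an autocompleted stub, not from any real codebase):

    # Hypothetical autocompleted function stub: a stock docstring template plus
    # a placeholder body. Every line here would count toward "AI generated"
    # totals, even though the docstring usually gets rewritten and the body
    # replaced entirely.
    def process_records(records):
        """
        TODO: Summarize what process_records does.

        Args:
            records: TODO describe records.

        Returns:
            TODO describe the return value.
        """
        pass  # the actual logic still has to be written by the coder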
Lol I really doubt that.