• xmunk@sh.itjust.works
    link
    fedilink
    arrow-up
    202
    arrow-down
    11
    ·
    7 months ago

    ChatGPT is hilariously incompetent… but on a serious note, I still firmly reject tools like Copilot outside of demos and the like, because they drastically reduce code quality for short-term acceleration. That’s a terrible trade-off in terms of cost.

    • ToothlessFairy@lemmy.world
      link
      fedilink
      arrow-up
      125
      arrow-down
      1
      ·
      7 months ago

      I enjoy using copilot, but it is not made to think for you. It’s a better autocomplete, but don’t ever let it do more than a line at once.

        • gravitas_deficiency@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          12
          ·
          7 months ago

          As a software engineer, the number of people I encounter in a given week who either refuse to or are incapable of understanding that distinction baffles and concerns me.

      • takeda@lemmy.world
        link
        fedilink
        arrow-up
        47
        arrow-down
        2
        ·
        7 months ago

        The problem I have with it is that whatever time it saves me, I have to spend reading the code it produces. I probably spend more time on that overall, because once in a while the code it produces is broken in a subtle way.

        I see some people swearing by it, which is the opposite of my experience. I suspect that if your coding was copying code from Stack Overflow, then it did indeed improve your experience, since that process is now streamlined.

        • AeroLemming
          link
          fedilink
          English
          arrow-up
          11
          arrow-down
          2
          ·
          7 months ago

          I use Codeium and I’ve found it helpful for things like guessing the next line if it’s similar to the line I just wrote, but it’s terrible when I’m thinking about how to actually solve a problem: it keeps suggesting wrong answers that make me think about them, realize they’re wrong, and forget the solution I was working on in my head.

          • oce 🐆@jlai.lu
            link
            fedilink
            arrow-up
            2
            ·
            7 months ago

            I don’t know if it does yet, but if ChatGPT starts providing a source for every piece of information, it would make it much faster to find the relevant information and check the sources, rather than clicking through websites one by one.

            • tourist@lemmy.world
              link
              fedilink
              arrow-up
              2
              ·
              7 months ago

              Yep, ChatGPT-4 allows optional calls to Bing now.

              It used to have a problem with making claims that were irrelevant to, or contradicted, its own sources, but I don’t recall encountering that problem recently.

    • stjobe@lemmy.world
      link
      fedilink
      arrow-up
      57
      arrow-down
      1
      ·
      7 months ago

      Biggest problem with it is that it lies with the exact same confidence it tells the truth. Or, put another way, it’s confidently incorrect as often as it is confidently correct - and there’s no way to tell the difference unless you already know the answer.

      • Swedneck@discuss.tchncs.de
        link
        fedilink
        arrow-up
        21
        arrow-down
        2
        ·
        7 months ago

        It’s kinda hilarious to me, because one of the FIRST things AI researchers did was get models to identify things and output answers together with a confidence for each potential ID, and now we’ve somehow regressed back from that point.

        • Did we really regress back from that?

          I mean, giving a confidence score for recognizing a certain object in a picture is relatively straightforward.

          But LLMs put words together by how likely they are to belong together given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the statement is to be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. So ChatGPT was confident that something is right when the wife says it, simply because it thinks those words belong together.
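
          A rough sketch of the difference (plain Python, no ML framework; every number here is invented for illustration): a classifier attaches a confidence to each candidate label, while an LLM attaches a probability to each candidate next token, and that probability says nothing about whether the resulting statement is true.

          import math

          def softmax(logits):
              """Turn raw model scores into probabilities that sum to 1."""
              exps = [math.exp(x) for x in logits]
              total = sum(exps)
              return [e / total for e in exps]

          # Image-classifier style: one confidence per candidate label.
          labels = ["cat", "dog", "toaster"]
          logits = [4.1, 1.3, -2.0]  # hypothetical model outputs
          for label, p in zip(labels, softmax(logits)):
              print(f"{label}: {p:.1%}")

          # LLM style: a probability per *next token*, given the prompt
          # "my wife says 2+2 equals". A high probability only means "these
          # words tend to follow that prompt", not "the statement is true".
          next_token_probs = {"5": 0.62, "4": 0.27, "four": 0.11}
          print(max(next_token_probs, key=next_token_probs.get))  # prints "5"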

            • 𝕽𝖔𝖔𝖙𝖎𝖊𝖘𝖙@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              7 months ago

              Gödel numbers are typically associated with formal mathematical statements, and there isn’t a formal proof for 2+2=5 in standard arithmetic. However, if you’re referring to a non-standard or humorous context, please provide more details.

              • metaStatic@kbin.social
                link
                fedilink
                arrow-up
                1
                ·
                7 months ago

                Of course, I don’t know enough about the actual proof for this to be anything but a joke, but there are infinitely many numbers, so there should be infinitely many proofs.

                There are also meme proofs out there that I assume could be given a Gödel number easily enough.
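
                For the curious, a rough sketch of the classic prime-exponent flavour of Gödel numbering, assuming sympy for the nth-prime helper and an invented toy symbol table. Any string of symbols, true, false, or meme, gets a number this way; having a number says nothing about being provable.

                from sympy import prime  # prime(n) = nth prime; assumes sympy is installed

                SYMBOLS = {"0": 1, "s": 2, "+": 3, "=": 4}  # invented toy symbol table

                def godel_number(formula: str) -> int:
                    # Encode symbol codes c1, c2, ... as 2**c1 * 3**c2 * 5**c3 * ...
                    n = 1
                    for i, ch in enumerate(formula, start=1):
                        n *= prime(i) ** SYMBOLS[ch]
                    return n

                # "2+2=5" in successor notation: s(s(0)) + s(s(0)) = s(s(s(s(s(0)))))
                print(godel_number("ss0+ss0=sssss0"))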

    • DudeDudenson@lemmings.world
      link
      fedilink
      arrow-up
      40
      arrow-down
      1
      ·
      edit-2
      7 months ago

      they drastically reduce code quality for short-term acceleration.

      Oh boy, do I have news for you: that’s basically the only thing middle managers care about, short-term acceleration.

    • Poggervania@kbin.social
      link
      fedilink
      arrow-up
      38
      arrow-down
      1
      ·
      7 months ago

      But LinkedIn bros and corporate people are gonna gobble it up anyways because it has the right buzzwords (including “AI”) and they can squeeze more (low quality) work from devs to brag about how many things they (the corporate owners) are doing.

      • lurch@sh.itjust.works
        link
        fedilink
        arrow-up
        39
        arrow-down
        2
        ·
        7 months ago

        It’s just a fad. There’s just a small bit that will stay after the hype is gone. You know, like blockchain, AR, the metaverse, NFTs, and whatever it was before that. In a few years there will be another breakthrough with it and we’ll hear about it again for a short while, but for now it’s just a one-trick pony.

                • FellowEnt@sh.itjust.works
                  link
                  fedilink
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  7 months ago

                  I should’ve elaborated… It’s supposed to be a vast, seamless network of interconnected spaces, isn’t it? But that doesn’t exist yet. I think they jumped the gun, in that everyone thought Horizons was it. It’s only just starting to be built.

            • saltesc@lemmy.world
              link
              fedilink
              arrow-up
              2
              ·
              7 months ago

              I actually had ‘was’ first, then edited after Googling. It appears it is still a thing, but I can’t really tell for what.

          • Swedneck@discuss.tchncs.de
            link
            fedilink
            arrow-up
            14
            arrow-down
            3
            ·
            7 months ago

            The hilarious thing about blockchain is that the core concept is actively making the whole thing worse. The Matrix protocol is essentially blockchain without the decentralized ledger part, and it’s vastly superior in every single way.

            NFTs just show how fundamentally dumb blockchains are: if you skip the decentralized ledger bit, you never need to invent NFT functionality in the first place…

              • Swedneck@discuss.tchncs.de
                link
                fedilink
                arrow-up
                1
                ·
                7 months ago

                I’m not sure whether Matrix uses that; what I’m talking about is how it does the things everyone finds neat about blockchains without the inherent downsides, like massive power usage and EVERYONE having to replicate the ENTIRE ledger.

                I know IPFS uses Merkle trees though, and hilariously, blockchains largely rely on them to actually store any significantly sized data.
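
                For illustration, a rough sketch of the Merkle-tree idea being referenced, using only the Python standard library (the example data is made up): hashing content into a tree gives tamper-evidence on its own, with no globally replicated ledger required.

                import hashlib

                def sha256_hex(data: bytes) -> str:
                    return hashlib.sha256(data).hexdigest()

                def merkle_root(leaves: list[bytes]) -> str:
                    level = [sha256_hex(leaf) for leaf in leaves]
                    while len(level) > 1:
                        if len(level) % 2:            # odd count: duplicate the last node
                            level.append(level[-1])
                        # hash each adjacent pair into a parent node
                        level = [sha256_hex((a + b).encode())
                                 for a, b in zip(level[0::2], level[1::2])]
                    return level[0]

                events = [b"alice: hi", b"bob: hello", b"alice: bye"]
                print(merkle_root(events))  # changes completely if any single event is altered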

          • TwilightKiddy@programming.dev
            link
            fedilink
            arrow-up
            11
            arrow-down
            2
            ·
            7 months ago

            Well, if you stop listening to people who think it’s a way to get really rich really fast (which it obviously isn’t), cryptocurrencies are quite useful. International transfers are so much cheaper and easier with them.

          • lurch@sh.itjust.works
            link
            fedilink
            arrow-up
            5
            arrow-down
            1
            ·
            edit-2
            7 months ago

            Yes, you could, for example, use it to manage who is allowed to park in a garage, anonymously. The owner of a parking spot NFT can unlock the door from the outside. Stuff like that.

            However, it’s also possible to do that with a small web application; it’s just that payments and transfers of the parking spots are less free, and it’s not decentralized.
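
            A rough sketch of how the NFT check could look, assuming an ERC-721 contract and the web3.py library; the RPC URL, contract address, and token id are placeholders, and the “door” is just a print statement.

            from web3 import Web3  # assumes web3.py is installed

            RPC_URL = "https://rpc.example.invalid"  # placeholder node endpoint
            SPOT_CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder
            OWNER_OF_ABI = [{
                "name": "ownerOf", "type": "function", "stateMutability": "view",
                "inputs": [{"name": "tokenId", "type": "uint256"}],
                "outputs": [{"name": "", "type": "address"}],
            }]

            def may_open_door(visitor: str, spot_token_id: int) -> bool:
                # Whoever currently owns the parking-spot NFT gets to unlock the door.
                w3 = Web3(Web3.HTTPProvider(RPC_URL))
                contract = w3.eth.contract(address=SPOT_CONTRACT, abi=OWNER_OF_ABI)
                owner = contract.functions.ownerOf(spot_token_id).call()
                return owner.lower() == visitor.lower()

            if may_open_door("0x1111111111111111111111111111111111111111", 42):
                print("unlock")  # trigger the door hardware here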

          • Riven@lemmy.dbzer0.com
            link
            fedilink
            arrow-up
            2
            ·
            7 months ago

            The Proton VPN people are either using, or working on using, blockchain as a sort of email verification. IIRC it won’t have any cost or change in usage for the consumer, just an added layer of security. I’m not smart enough to understand how, but it sounds neat.

        • Fubber Nuckin'@lemmy.world
          link
          fedilink
          arrow-up
          7
          arrow-down
          5
          ·
          7 months ago

          I disagree, because unlike those things, AI actually has a use case. It needs a human supervisor and it isn’t always faster, but ChatGPT has been the best educational resource I’ve ever had next to YouTube. It’s also decent at pumping out a lot of lazy work and writing so I don’t have to, or at helping me break down a task into smaller parts. As long as you’re not expecting it to solve all your problems for you, it’s an amazing tool.

          People said the same things about 3D printing, and yeah, while it can’t create literally everything at industrial scale, and it’s not going to see much consumer use, it has found a place in certain applications like prototyping and small-scale production.

          • Catoblepas@lemmy.blahaj.zone
            link
            fedilink
            arrow-up
            9
            arrow-down
            2
            ·
            7 months ago

            ChatGPT is OK at summarizing popular, low specificity topics that tons of people have already written a ton about, but it’s terrible at anything else. When I tested its knowledge about the process of a niche interest of mine (fabric dyeing) it skipped completely over certain important pieces of information, and when I prompted it to include them it basically just mirrored my prompt back at me.

            Which has pretty much summed up my ChatGPT experience: it just regurgitates stuff I can find myself, but removes the ability to determine if the source is reliable. And if it’s something I’m already having trouble finding detailed information about it usually doesn’t help.

          • tryptaminev 🇵🇸 🇺🇦 🇪🇺@feddit.de
            link
            fedilink
            arrow-up
            6
            arrow-down
            2
            ·
            7 months ago

            I’d argue that ChatGPT is mostly great at taking bullshit tasks off humans who don’t want to do the bullshit, like regurgitating the text from a textbook in different words, or writing cover letters for job applications that are often machine-analyzed for buzzwords anyway.

            So its use case only exists in the domain of bullshit tasks that exist solely to occupy two people without any added value.

            • Fubber Nuckin'@lemmy.world
              link
              fedilink
              arrow-up
              4
              arrow-down
              3
              ·
              7 months ago

              Yeah, that is one thing it can do, but it’s not the only thing it does. I’m not sure how to get my point across well, but just because it gives you the wrong answer 25% of the time doesn’t mean it’s useless. Whatever you ask it to do, it often gets you most of the way there, as long as you know how to correct it or check its work. The ability to ask specific or follow-up questions when learning something makes it invaluable as a learning tool (if you’re teaching yourself, that is - i.e. if you’re a university student). It’s also very useful when brainstorming ideas or approaching a problem from a different angle. I can also ask it questions that are far more specific than what a search engine would get me.

              It really comes down to if the human operator knows how (and when) to use it properly.

              • nogrub@lemmy.world
                link
                fedilink
                arrow-up
                1
                arrow-down
                1
                ·
                7 months ago

                I never trust that information. I’d rather learn from a book, videos, or just from websites, and if I ever use it, I always fact-check it. Never blindly trust a computer.

                • Fubber Nuckin'@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  7 months ago

                  That was an integral part of this whole thing: you always fact-check the AI. I said this in both of my comments.

                  I totally get preferring textbooks or videos, though. I just find that the AI saves me time, since I can ask specific questions about things and it often gives me concise information that I understand more quickly.

      • EatYouWell@lemmy.world
        link
        fedilink
        arrow-up
        18
        arrow-down
        1
        ·
        7 months ago

        Yeah, they think it can turn a beginner dev into an advanced dev, but really it’s more like having a team of beginner devs.

        • Scew@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          ·
          7 months ago

          It’s alright for translation. As an intermediate dev, being able to translate knowledge into languages I’m not as familiar with is nice.

    • PlexSheep@feddit.de
      link
      fedilink
      arrow-up
      34
      ·
      edit-2
      7 months ago

      I’m still convinced that GitHub copilot is actively violating copyleft licenses. If not in word, then in the spirit.

    • TonyTonyChopper@mander.xyz
      link
      fedilink
      arrow-up
      31
      arrow-down
      3
      ·
      7 months ago

      they drastically reduce … quality for short-term acceleration

      Western society is built on this principle

      • CanadaPlus@futurology.today
        link
        fedilink
        English
        arrow-up
        4
        ·
        7 months ago

        Sort of. Nobody’s cutting corners on aviation structural components, for example. We’ve been pretty good at maximizing general value output, and usually that means lower quality, but not always.

      • EatYouWell@lemmy.world
        link
        fedilink
        arrow-up
        12
        arrow-down
        1
        ·
        7 months ago

        It’s helped me a bit with resolving weird Tomcat/Java issues when upgrading to RHEL 8, though. It didn’t give me an answer, but it gave me ideas on where to look (in my case, I didn’t realize fapolicyd had replaced SELinux).

        • noobdoomguy8658@feddit.de
          link
          fedilink
          English
          arrow-up
          2
          ·
          7 months ago

          That’s the point - you have the expertise to make proper sense of whatever it outputs. The people pushing for “AI” the most want to rely on it without the necessary expertise, or with only minimal effort, like feeding it some of your financial reports and having it generate a 5-year strategy, only to fail miserably and have no one to blame this time (they’ll still blame anyone but themselves, btw).

          It’s not the most useless tool in the world by any means, but the mainstream talk is completely out of touch with reality on the matter, and so are mainstream actions (i.e. overrelying on it and putting way too much faith into it).

      • Fubber Nuckin'@lemmy.world
        link
        fedilink
        arrow-up
        5
        arrow-down
        1
        ·
        7 months ago

        Yeah, it can speed up the process but you still have to know how to do it yourself when it inevitably messes something up.

    • rwhitisissle@lemmy.ml
      link
      fedilink
      arrow-up
      5
      arrow-down
      2
      ·
      7 months ago

      An unpopular opinion, I am sure, but if you’re a beginner with something - a new language, a new framework - and hate reading the docs, it’s a great way of just jumping into a new project. Like, I’ve been hacking away on a Django web server for a personal project and it saved me a huge amount of time with understanding how apps are structured, how to interact with its settings, registering URLs, creating views, the general development lifecycle of the project, and the basic commands I need to do what I’m trying to do. God knows Google is a shitshow now, and while Stack Overflow is fine and dandy (when it isn’t remarkably toxic and judgmental), the fact is that it cuts down on hours of fruitless research, assuming you’re not asking it to do anything genuinely novel or hyper-specific.
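
      For anyone who hasn’t touched Django, this is roughly the kind of boilerplate it walked me through; the app name and view below are invented for illustration.

      # notes/views.py  (the app name "notes" is hypothetical)
      from django.http import HttpResponse

      def index(request):
          return HttpResponse("Hello from the notes app")

      # notes/urls.py
      from django.urls import path
      from . import views

      urlpatterns = [
          path("", views.index, name="index"),
      ]

      # project/urls.py: register the app with
      #   path("notes/", include("notes.urls"))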

    • Wes_Dev@lemmy.ml
      link
      fedilink
      arrow-up
      6
      arrow-down
      4
      ·
      7 months ago

      It helps a complete newbie like me get started and even learn while I do. Due to its restrictions and shortcomings, I’ve been having to learn how to structure and plan a project more carefully and thoughtfully, even creating design specs for programs and individual functions, all in order to provide useful prompts for ChatGPT to act on. I learn best by trial and error, with the ability to ask why things happened or are the way they are.

      So, as a secondary teaching assistant, I think it’s very useful. But trying to use the API for ChatGPT 4 is…not worth it. I can easily blow through $20 in a few hours. So, I got a day and a half of use out of it before I gave up. :|

  • MajorHavoc@lemmy.world
    link
    fedilink
    arrow-up
    170
    arrow-down
    4
    ·
    edit-2
    7 months ago

    I predict that, within the year, AI will be doing 100% of the development work that isn’t total and utter bullshit pain-in-the-ass complexity, layered on obfuscations, composed of needlessly complex bullshit.

    That’s right, within a year, AI will be doing .001% of programming tasks.

      • starman2112@lemmy.world
        link
        fedilink
        arrow-up
        12
        ·
        7 months ago

        Legitimately could be a use case

        “Attend this meeting for me. If anyone asks, claim that your camera and microphone aren’t working. After the meeting, condense the important information into one paragraph and email it to me.”

        • otp@sh.itjust.works
          link
          fedilink
          arrow-up
          15
          ·
          7 months ago

          Here is a summary of the most important information from that meeting. Since there were two major topics, I’ve separated them into two paragraphs.

          1. It is a good morning today.
          2. Everyone is thanked for their time. Richard is looking forward to next week’s meeting.

          The rest of the information was deemed irrelevant to you and your position.

          • MajorHavoc@lemmy.world
            link
            fedilink
            arrow-up
            7
            ·
            edit-2
            7 months ago

            Holy cow! You’ve done it! You could wrap this (static text block) in a web API and sell it.

            Edit: /s, I guess. But that text really is easily an 80% solution for meeting summaries.
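
            A minimal sketch of that “wrap the static text block in a web API” idea, assuming Flask; the endpoint path is made up and the summary text is the one above.

            from flask import Flask, jsonify  # assumes Flask is installed

            app = Flask(__name__)

            MEETING_SUMMARY = (
                "1. It is a good morning today.\n"
                "2. Everyone is thanked for their time. "
                "Richard is looking forward to next week's meeting."
            )

            @app.route("/meeting-summary")
            def meeting_summary():
                # The same "summary" for every meeting: that's the joke.
                return jsonify(summary=MEETING_SUMMARY)

            if __name__ == "__main__":
                app.run()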

      • MajorHavoc@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        7 months ago

        Hell yes! I’ll join the front of the hype train if they can demo an AI fielding questions while a project manager reviews a card wall.

        • otp@sh.itjust.works
          link
          fedilink
          arrow-up
          2
          ·
          7 months ago

          Y’know… that seems reasonable. I’d place my bet that there’d be something good enough in only a few years. (Text only, I’d bet)

  • hglman@lemmy.ml
    link
    fedilink
    English
    arrow-up
    96
    arrow-down
    2
    ·
    7 months ago

    Engineering is about trust. In all other and generally more formalized engineering disciplines, the actual job of an engineer is to provide confidence that something works. Software engineering may employ fewer people because the tools are better and make people much more productive, but until everyone else trusts the computer more, the job will exist.

    If the world trusts AI over engineers then the fact that you don’t have a job will be moot.

    • Rodeo@lemmy.ca
      link
      fedilink
      arrow-up
      11
      arrow-down
      3
      ·
      7 months ago

      People don’t have anywhere near enough knowledge of how things work to make their choices based on trust. People aren’t getting on the subway because they trust the engineers did a good job; they’re doing it because it’s what they can afford and they need to get to work.

      Similarly, people aren’t using Reddit or Adobe or choosing their car’s firmware based on trust. People choose what is affordable and convenient.

      • hglman@lemmy.ml
        link
        fedilink
        English
        arrow-up
        15
        ·
        7 months ago

        In civil engineering, public works are certified by an engineer; it’s literally them saying “if this fails, I am at fault.” The public is trusting the engineer to say it’s safe.

        • Sylvartas@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          7 months ago

          Yeah, people may not know that the subway is safe because of engineering practices, but if there was a major malfunction, potentially involving injuries or loss of life, every other day, they would know, and I’m sure they would think twice about using it.

      • zagaberoo@beehaw.org
        link
        fedilink
        arrow-up
        4
        ·
        7 months ago

        What’s being discussed here is the hiring of engineers rather than consumer choices. Hiring an engineer is absolutely an expression of trust. The business trusts that the engineer will be able to concretely realize abstract business goals, and that they will be able to troubleshoot any deviations.

        AI writing code is one thing, but intuitively trusting that an AI will figure out what you want for you and keep things running is a long way off.

      • drathvedro
        link
        fedilink
        arrow-up
        1
        ·
        7 months ago

        In my hometown there are two types of public transit: municipal and commercial. I was surprised to learn that a lot of folks, even the younger ones, only travel by the former, even though the commercial lines are faster, more frequent, and more comfortable. When asked why, the answer is the same: if anything happens on municipal transport, you can sue the transport company and even the city itself. If anything happens on a commercial line, there’s only a migrant driver and “Individual Entrepreneur John Doe” with a few leased buses to his name. Trust definitely plays a factor here, but you’re right that it’s definitely not based on technical knowledge.

    • CanadaPlus@futurology.today
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      7 months ago

      Hmm. I’ve never thought about it that way. It took a long time for engineering to become that way IIRC - in the past anybody could build a bridge. The main obstacle to this, then, is that people might be a bit too risk-tolerant around AI at first. Hopefully this is where it ends up going, though.

    • eskimofry@lemmy.world
      link
      fedilink
      arrow-up
      1
      arrow-down
      1
      ·
      7 months ago

      Very interesting point. Probably the most pressing problem, then, is to find a way for the black box to be formally verified, and the role of AI engineers shifts to keeping the CI/CD green.

    • chiliedogg@lemmy.world
      cake
      link
      fedilink
      arrow-up
      2
      arrow-down
      2
      ·
      7 months ago

      As someone who works on the city side of development review, I can firmly say I’d sooner trust a puppy alone with my dinner than a Civil Engineer.

  • netburnr@lemmy.world
    link
    fedilink
    English
    arrow-up
    67
    arrow-down
    2
    ·
    7 months ago

    I just used Copilot for the first time. It made me a ton of call-to-action text and website page text for various service pages I was creating for a home builder. It was surprisingly useful; of course I modified the output a bit, but overall it saved me a ton of time.

    • Daxtron2@lemmy.ml
      link
      fedilink
      arrow-up
      31
      arrow-down
      3
      ·
      7 months ago

      Copilot has cut my workload by about 40% freeing me up for personal projects

      • uplusion23
        link
        fedilink
        arrow-up
        34
        arrow-down
        1
        ·
        7 months ago

        Copilot is only dangerous in the hands of people who couldn’t program otherwise. I love it, it’s helped a ton on tedious tasks and really is like a pair programmer

        • Daxtron2@lemmy.ml
          link
          fedilink
          arrow-up
          13
          ·
          7 months ago

          Yeah it’s perfect for if you can distinguish between good and bad generations. Sometimes it tries to run on an empty text file in vscode and it just outputs an ingredients list lol

    • katy ✨@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      10
      ·
      7 months ago

      I get Copilot through GitHub Education, and let me tell you, the first time I put out a bunch of code related to one of my entities, I was floored. It’s definitely not there to write your entire app, but it saves so much time.

    • funkless_eck@sh.itjust.works
      link
      fedilink
      arrow-up
      4
      ·
      7 months ago

      I’d argue it’s more work to get ChatGPT to suggest a CTA of “Download now” or “Learn more” than it is to type it by hand.

        • MightyGalhupo@lemmy.world
          link
          fedilink
          arrow-up
          9
          arrow-down
          1
          ·
          7 months ago

          If it’s on the same device, it would open a page showing her what’s in his user’s Downloads folder. I think the joke is he might have something embarrassing there, but I wouldn’t know, since I only have things there while I’m downloading them, and then I immediately file them away to some actual hyper-specific folder.

          • stebo02@sopuli.xyz
            link
            fedilink
            arrow-up
            9
            ·
            edit-2
            7 months ago

            Why would they be on the same device? How can they be on the same device at the same time? Also, if she gets the full link, it would only show her the HTML page, not the rest of the folder.

            • twopi@lemmy.ca
              link
              fedilink
              arrow-up
              1
              ·
              7 months ago

              I think it’s both?

              1. Send link to her but it doesn’t work because it’s only available on the local machine
              2. Show the website by first opening the downloads folder then clicking the website

              You can bring your device to people. Most people use laptops now instead of desktops

  • R0cket_M00se@lemmy.world
    link
    fedilink
    English
    arrow-up
    49
    arrow-down
    2
    ·
    7 months ago

    AI is only as good as the person using it, like literally any other tool in human existence.

    It’s meant to amplify the workload of the professional, not replace them with a layman armed with an LLM.

    • zeze
      link
      fedilink
      arrow-up
      6
      arrow-down
      2
      ·
      7 months ago

      deleted by creator

      • DudeDudenson@lemmings.world
        link
        fedilink
        arrow-up
        10
        arrow-down
        1
        ·
        7 months ago

        What’s most likely to happen is that the idiot on the other side would start complaining to the AI and asking to talk to a manager.

        And maybe he gets the right hallucination and the AI starts behaving as a manager

  • rockSlayer@lemmy.world
    link
    fedilink
    arrow-up
    20
    ·
    edit-2
    7 months ago

    This AI thing will certainly replace my MD to HTML converter and definitely not misplace my CSS and JS headers
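
    (For context, the kind of converter in question is roughly this; a minimal sketch assuming the third-party markdown package, with made-up file names for the CSS/JS it will definitely not misplace.)

    import markdown  # assumes the "markdown" package is installed

    def render_page(md_text: str) -> str:
        body = markdown.markdown(md_text)  # Markdown -> HTML fragment
        return (
            "<!DOCTYPE html>\n<html>\n<head>\n"
            '  <link rel="stylesheet" href="style.css">\n'
            '  <script src="main.js" defer></script>\n'
            "</head>\n<body>\n" + body + "\n</body>\n</html>"
        )

    print(render_page("# Hello\n\nStill exactly where I put it."))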

  • Gabu@lemmy.world
    link
    fedilink
    arrow-up
    19
    arrow-down
    4
    ·
    7 months ago

    On a more serious note, ChatGPT, ironically, does suck at webdev frontend. The one task that pretty much everyone agrees could be done by a monkey (given enough time) is the one it doesn’t understand at all.

    • Strawberry@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      17
      arrow-down
      8
      ·
      7 months ago

      The one task that pretty much everyone agrees could be done by a monkey

      A phrase commonly uttered about web dev by mediocre programmers who spend 99% of their time writing the same copy-paste Spring Boot mid-tier code.

      • duck1e@lemmy.ml
        link
        fedilink
        arrow-up
        8
        arrow-down
        1
        ·
        7 months ago

        Most websites are bloated and shit. Webdev gets shat upon because they write code that can’t last 4 months without needing a rewrite.

        • frezik@midwest.social
          link
          fedilink
          arrow-up
          2
          ·
          7 months ago

          A good chunk of that has to do with trackers and ads. Things forced on webdevs by management.

          Not that webdevs couldn’t improve anything otherwise; there are certainly optimizations to be had. But pop open the dev network panel on your browser, clear cache, and refresh the page. A lot of the holdup and dancing elements you’ll see are from third party trackers and ads.

        • Strawberry@lemmy.blahaj.zone
          link
          fedilink
          arrow-up
          2
          ·
          7 months ago

          I agree, and in addition to way too many trackers and advertisements clogging up the page, this is also due to the time, effort, and knowledge not being provided to write performant and compliant code, which should be important given the infinite possibilities of client machines. This can be worsened by only having full stack developers who aren’t knowledgeable in web dev (especially CSS) or by sacrificing performance for trendy javascript-bloated design features

          • AVincentInSpace@pawb.social
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            7 months ago

            You do of course realize that you just said that the problem with the modern web is that webdev can be and far too often is done by monkeys?

            I agree that there is a vast difference, even from an end user’s perspective, between a good web developer and a bad one, but the fact remains that the bar for calling oneself a web dev is appallingly low and ChatGPT nevertheless fails to clear it

            • Strawberry@lemmy.blahaj.zone
              link
              fedilink
              arrow-up
              2
              ·
              7 months ago

              I suppose you could see it like that, but I’m saying it can’t be done by “monkeys”, and the pervasive notion that it can has led to broken websites across the Internet

              • AVincentInSpace@pawb.social
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                7 months ago

                I think I see what you mean. Many a very competent backend dev (and many more a kid in their bedroom with zero programming experience) has thought to themselves “how hard can webdev possibly be?” and blindly stumbled through making a website that looks fine on their machine without bothering to understand what the various CSS units do and turning it into an utter monstrosity if you even slightly change the size of the browser window, and the web suffers for it.

                As a primarily backend dev myself who’s tried my hand at web once or twice, I still think that web developers are by far the most pampered in the industry when it comes to development tools (I can change CSS parameters with sliders right in my browser, see the page update in real time, and when I’m done I can just export the modified .css file to disk and upload it directly to my server with zero touchup to make my changes live? Are you KIDDING ME?) but I also think it’s important to treat the practice with the respect it deserves. By that I mean taking the time to learn the languages, read through MDN’s excellent documentation, and take the time to fully understand what each CSS parameter actually does instead of trial-and-erroring your way into something that only works for you. The same thing you’d do if you were learning any new programming language. Once you do that, apart from a few hiccups due to browser inconsistencies (any time Safari would like to stop eating glue I’d appreciate it) and having to come up with something that looks good in portrait, and get past a metric f**k ton of googling and memorizing the minute differences between dozens of very similar parameters, it’s some of the most fun I’ve had as a programmer. I love being able to just go “I want a bunch of circles at the top of my page that bounce up and down in sequence.” “Sure, give me two minutes.” I’d stress about that for days in any other environment. Why didn’t anyone tell me it could BE like this?

    • Kevin@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      edit-2
      7 months ago

      I don’t think it’s very useful at generating good code or answering anything about most libraries, but I’ve found it to be helpful answering specific JS/TS questions.

      The MDN version is also pretty great too. I’ve never done a Firefox extension before and MDN Plus was surprisingly helpful at explaining the limitations on mobile. Only downside is it’s limited to 5 free prompts/day.

    • httpjames@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      7 months ago

      GPT-4 Turbo is actually much better than GPT-3.5 and GPT-4 for coding. It has a way better understanding of design now.

  • 30p87@feddit.de
    link
    fedilink
    arrow-up
    15
    arrow-down
    3
    ·
    7 months ago

    The only thing ChatGPT etc. is useful for, in every language, is to get ideas on how to solve a problem, in an area you don’t know anything about.

    ChatGPT, how can I do xy in C++?
    You can use the library ab, like …

    That’s where I usually search for the library and check the docs to see if it’s actually possible to do it this way. And often, it’s not.

    • AeroLemming
      link
      fedilink
      English
      arrow-up
      12
      ·
      7 months ago

      Yeah, it’s amazing at showing you the idiomatic way to do really specific, narrow-scoped things in a language you’re not familiar with… except for when it’s wrong.

    • 31337@sh.itjust.works
      link
      fedilink
      arrow-up
      3
      ·
      7 months ago

      It’s good at refactoring smaller bits of code. The longer the input, the more likely it is to make errors (and you should prefer to start a new chat rather than continue a long one, for the same reason). It’s also pretty good at translating code to other languages (e.g. MySQL->PG, Python->C#), reading OpenAPI JSON definitions and creating model classes to match, and stuff like that.

      Basically, it’s pretty good when it doesn’t have to generate stuff that requires complex logic. If you ask it about tasks, languages, and libraries that it has likely trained a lot on (i.e. the most popular stuff in FOSS software and example repos), it doesn’t hallucinate libraries too much. And GPT-4 is a lot better than GPT-3.5 at coding tasks; GPT-3.5 is pretty bad. GPT-4 is also a bit better than Copilot.
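
      For example, given a small petstore-style OpenAPI fragment (invented here for illustration), the matching model class is just a few lines of mechanical translation; a minimal sketch:

      # Schema fragment (made up, petstore-style):
      #   "Pet": {"type": "object", "properties": {
      #       "id":   {"type": "integer"},
      #       "name": {"type": "string"},
      #       "tag":  {"type": "string"}}}
      from dataclasses import dataclass

      @dataclass
      class Pet:
          id: int
          name: str
          tag: str

      print(Pet(id=1, name="Rex", tag="dog"))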

      • Stumblinbear@pawb.social
        link
        fedilink
        arrow-up
        2
        ·
        7 months ago

        I’ve found it great for tracking down specific things in libraries and databases I’m not terribly familiar with when I don’t know the exact term for them