These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s “over 7 years in the tech sector” which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets” whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.

  • 200fifty@awful.systems
    1 year ago

    Well, you know, you don’t want to miss out! You don’t want to miss out, do you? Trust me, everyone else is doing this hot new thing, we promise. So you’d better start using it too, or else you might get left behind. What is it useful for? Well… it could make you more productive. So you better get on board now and, uh, figure out how it’s useful. I won’t tell you how, but trust me, it’s really good. You really should be afraid that you might miss out! Quick, don’t think about it so much! This is too urgent!

    • sparkl_motion@beehaw.org
      1 year ago

      Pretty much this. I work in support services in an industry that can’t really use AI to resolve issues due to the myriad of different deployment types and end user configurations.

      No way in hell will I be out of a job due to AI replacing me.

      • self@awful.systems
        1 year ago

        your industry isn’t alone in that — just like blockchains, LLMs and generative AI are a solution in search of a problem. and like with cryptocurrencies, there’s a ton of grifters with a lot of money riding on you not noticing that the tech isn’t actually good for anything

        • TehPers@beehaw.org
          1 year ago

Unlike blockchains, LLMs have practical uses (GH copilot, for example, and some RAG use cases like summarizing aggregated search results). Unfortunately, everyone and their mother seems to think they can solve every problem they have, and it doesn’t help when suits in companies want to use LLMs just to market that they use them.

          Generally speaking, they are a solution in search of a problem though.

          • self@awful.systems
            1 year ago

GH copilot, for example, and some RAG use cases like summarizing aggregated search results

            you have no idea how many engineering meetings I’ve had go off the rails entirely because my coworkers couldn’t stop pasting obviously wrong shit from copilot, ChatGPT, or Bing straight into prod (including a bunch of rounds of re-prompting once someone realized the bullshit the model suggested didn’t work)

            I also have no idea how many, thanks to alcohol

            • Steve@awful.systems
              1 year ago

              Haha they are, in fact, solutions that solve potential problems. They aren’t searching for problems but they are searching for people to believe that the problems they solve are going to happen if they don’t use AI.

            • TehPers@beehaw.org
              1 year ago

              That sounds miserable tbh. I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same). If your engineers are just pasting whatever BS comes out of the LLM into their code, maybe they need a serious talking to about replacing them with the LLM if they can’t contribute anything meaningful beyond that.
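(For the record, the sort of pattern-continuation being described looks roughly like this — a hypothetical Python sketch, with the keys and env var names made up:)

```python
import os

# Five near-identical lines, each differing only in key and default --
# the kind of pattern a completion tool can continue from the first one.
SETTINGS = {
    "db_host": os.environ.get("APP_DB_HOST", "localhost"),
    "db_port": os.environ.get("APP_DB_PORT", "5432"),
    "db_user": os.environ.get("APP_DB_USER", "app"),
    "db_name": os.environ.get("APP_DB_NAME", "app"),
    "db_password": os.environ.get("APP_DB_PASSWORD", ""),
}
```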

              • self@awful.systems
                1 year ago

                as much as I’d like to have a serious talk with about 95% of my industry right now, I usually prefer to rant about fascist billionaire assholes like altman, thiel, and musk who’ve poured a shit ton of money and resources into the marketing and falsified research that made my coworkers think pasting LLM output into prod was a good idea

                I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same).

                it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

                • 200fifty@awful.systems
                  1 year ago

                  it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

                  I was gonna say… good old qa....q 20@a does the job just fine thanks :p

                • TehPers@beehaw.org
                  1 year ago

Yes, the marketing of LLMs is problematic, but it doesn’t help that they’re extremely demoable to audiences who don’t know enough about data science to realize how infeasible it is for a service to be inaccurate as often as LLMs are. Show a cool LLM demo to a C-suite and chances are they’ll want to make a product out of it, regardless of the fact you’re only getting acceptable results 50% of the time.

                  it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

                  I’m perfectly fine with vscode, and I know enough vim to make quick changes, save, and quit when git opens it from time to time. It also has multi-cursor support which helps when editing multiple lines in the same way, but not when there are significant differences between those lines but they follow a similar pattern. Copilot can usually predict what the line should be given enough surrounding context.

                • TehPers@beehaw.org
                  1 year ago

It’s not that uncommon when filling an array with data or populating a YAML/JSON file by hand. It can even be helpful when populating something like a Docker Compose config, which I use occasionally to spin up local services like DBs and such while debugging.
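(Concretely, the Compose case looks something like this — a minimal hypothetical config for a throwaway local Postgres while debugging; service name, image tag, and credentials are illustrative:)

```yaml
# docker-compose.yml -- spin up a disposable local DB while debugging
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: app
    ports:
      - "5432:5432"
```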

  • Steve@awful.systems
    1 year ago

    “learn AI now” is interesting in how much it is like the crypto “build it on chain” and how they are both different from something like “learn how to make a website”.

    Learning AI and Building on chain start with deciding which product you’re going to base your learning/building on and which products you’re going to learn to achieve that. Something that has no stability and never will. It’s like saying “learn how to paint” because in the future everyone will be painting. It doesn’t matter if you choose painting pictures on a canvas or painting walls in houses or painting cars, that’s a choice left up to you.

    “Learn how to make a website” can only be done on the web and, in the olden days, only with HTML.

    “Learn AI now”, just like “build it on chain” is nothing but PR to make products seem like legitimised technologies.

    Fuckaduck, ai is the ultimate repulseware

    • Steve@awful.systems
      1 year ago

      I wanna expand on this a bit because it was a rush job.

      This part…

      Learning AI and Building on chain start with deciding which product you’re going to base your learning/building on and which products you’re going to learn to achieve that. Something that has no stability and never will.

      …is a bit wrong. The AI environment has no stability now because it’s a mess of products fighting for sensationalist attention. But if it ever gains stability, as in there being a single starting point for learning AI, it will be because a product, or a brand, won. You’ll be learning a product just like people learned Flash.

      Seeing people in here talk about CoPilot or ChatGPT and examples of how they have found it useful is exactly why we’re going to find ourselves in a situation where software products discourage any kind of unconventional or experimental ways of doing things. Coding isn’t a clean separation between mundane, repetitive, pattern-based, automatable tasks and R&D style, hacking, or inventiveness. It’s a recipe for applying the “wordpress theme” problem to everything where the stuff you like to do, where your creativity drives you, becomes a living hell. Like trying to customise a wordpress theme to do something it wasn’t designed to do.

      The stories of chatgpt helping you out of a bind are the exact stories that companies like openAI will let you tell to advertise for them, but they’ll never go all in on making their product really good at those things because then you’ll be able to point at them and say “ahah! it can’t do this stuff!”

        • Steve@awful.systems
          1 year ago

          It’s my own name I made up from a period in the late 2000s, early 2010s when I’d have a lot of freelance clients ask me to build their site “but it’s easy because I have already purchased an awesome theme, I just need you to customise it a bit”

          It’s the same as our current world of design systems and component libraries. They get you 95% of the way and assume that you just fill in the 5% with your own variations and customisations. But what really happens is you have 95% worth of obstruction from making what would normally be the most basic CSS adjustment.
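(The “95% worth of obstruction” in practice, with hypothetical selectors: the purchased theme ships a high-specificity rule, so a basic colour change becomes a specificity fight instead of one simple declaration:)

```css
/* What the purchased theme ships, buried in a minified bundle: */
.site-wrap .theme-card .card-header h3.card-title { color: #333; }

/* What "just change the heading colour" ends up requiring --
   a selector that matches the theme's specificity, rule for rule: */
.site-wrap .theme-card .card-header h3.card-title { color: #0a66c2; }
```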

          It’s really hard to explain to someone that it’d be cheaper and faster if they gave me designs and I built a theme from scratch than it would be to panel-beat their pre-built theme into the site they want.

          “customise” is the biggest lie in dev ever told

          • self@awful.systems
            1 year ago

            I’d have a lot of freelance clients ask me to build their site “but it’s easy because I have already purchased an awesome theme, I just need you to customise it a bit”

            oh my god, this was all of my clients when I was in college

            • Steve@awful.systems
              1 year ago

              holy shit, Airtable - the 4th app in my growing list of “UX is the product” apps that will definitely all be absorbed into one of the other apps on the list at some point. (Notion, Slack, Figma)

              they sell flexibility, not speciality! It’s exactly what my rant about AI products is based on.

              Here’s a quick collage of the 4 product taglines. Not a concrete purpose in sight. They know you can’t call them up and say “hey, I paid good money for your product and it isn’t doing productivities!”

1: The fastest way to build apps. Empower your team to work faster and more confidently than ever before. 2: Made for people. Built for productivity. Connect the right people, find anything you need and automate the rest. That's work in Slack, your productivity platform. 3: Work together to build the best products. Explore design possibilities, build prototypes, and easily translate your work into code with Figma—a collaborative product development platform for teams. 4: Your wiki, docs, & projects. Together. Notion is the connected workspace where better, faster work happens. Now with AI

              • self@awful.systems
                1 year ago

                The fastest way to build apps

                this is an ad for a self-destructive work/life balance and a paycheck that’s high enough you can just barely afford to patch yourself up when it catches up with you?

          • froztbyte@awful.systems
            1 year ago

            ah. yeah. I know what you mean.

            I have a set of thoughts on a related problem in this (which I believe I’ve mentioned here before (and, yes, still need to get to writing)).

            the dynamics of precision and loss, in communication, over time, socially, end up resulting in some really funky setups that are, well, mutually surprising to most/all parties involved pretty much all of the time

            and the further down the chain of loss of precision you go, well, godspeed soldier

            • Steve@awful.systems
              1 year ago

              Also, like, when you simplify the complicated parts of something, what happens to the parts of that thing that were already simple? They don’t get more simple, usually they become more complex, or not possible at all anymore.

              • froztbyte@awful.systems
                1 year ago

                one of the things I love ranting about, and teaching (yes, seriously), to people, is that

                simple != simplicity

                it’s a nuanced little distinction. but it’s also a distinction with worlds of variances.

                and there are so, so, so, so, so, so many people who think the former is the goal

                it’s a fucking scourge

              • froztbyte@awful.systems
                1 year ago

                once you learn that I’m, at most, touristly familiar with lisp

                you learn that the inside of my mind (and how it contextualises) is a deeply scary place

        • Steve@awful.systems
          1 year ago

I’ve been watching the 5 hours of tobacco advertising hearings from the 90s in a floating window while working on spaghetti-code Vue.js components all day.

            • Steve@awful.systems
              1 year ago

              seriously, every minute of these hearings is fascinating. Just some of the most evil, greedy, slimy shit coming out of the mouths of suited up old white men who are trying every single misdirection possible to justify targeted marketing of tobacco

              • froztbyte@awful.systems
                1 year ago

                (~stream of consciousness commentary because spoon deficit:)

                I’ve seen samples of it used in some media before

                I haven’t ever gotten to watch it myself

                probably there’s value in viewing and analyzing it in depth, because… a lot of other bad actors (involved in current-day bad) pull pretty much the “same sort of shit”

                the legal methodology and wordwrangling and dodging may have evolved (<- speculation/guess)

                but near certainly there’s a continuum

                • Steve@awful.systems
                  1 year ago

                  If you feel like it :)

                  https://archive.org/details/tobacco_pxv27a00

                  I’ve lost my link to part 2 somehow…

                  I would say that the modern techniques are not as modern as I thought. I’m seeing plenty of similarities to crypto whataboutisms and ai charlatans claiming to care about the common person.

                  Not sure if this’ll work - but here’s a clip I posted on masto of a guy basically saying tobacco companies should be able to advertise because advertising is a fight for market share, not for increasing the market https://hci.social/@fasterandworse/111142173296522921

              • maol@awful.systems
                1 year ago

                People forget just how evil the tobacco companies were. A factor in why I don’t smoke is that I just don’t want people like this to earn money.

                • Steve@awful.systems
                  1 year ago

                  the hearing is just for regulations on their advertising practices too. One of the most common complaints from the lobbyists was “if you want to do this you should go all the way and outlaw smoking completely” as if a marlboro logo on an f1 car was keeping the industry alive.

          • Steve@awful.systems
            1 year ago

            Thanks! It’s not really unhinged, just written in an unhinged manner I think. Trying to make too many points in a small space

        • Steve@awful.systems
          1 year ago

          I meant that anecdotes of these things being helpful usually present mundane, repetitive coding tasks as being separate from the supposed good parts of development, not intertwined with them. I liken that to the value proposition of frameworks, customisable themes, design systems, or component libraries. They are fine until you want to go off-script, where having deep knowledge of the underlying system becomes a burden because you are obstructed by the imposed framework.

    • deur@feddit.nl
      1 year ago

      What’s worse is these people who shill AI and genuinely are convinced Chat GPT and stuff are going to take over the world will not feel an ounce of shame once AI dies just like the last fad.

      If I was wrong about AI being completely useless and how its not going to take over the world, I’d feel ashamed at my own ignorance.

      Good thing I’m right.

    • Christopher Wood@awful.systemsOP
      1 year ago

      I haven’t paid that much attention to the software and platforms behind all this. Now that you mention it, yes, they are all products not underlying technologies. A bit like if somebody was a Zeus web server admin versus AOL web server admin without anybody being just a web server admin. Or like if somebody had to choose between Windows or Solaris without just considering operating systems.

      Then again, what with all the compute and storage and ongoing development needed I’m not convinced that AI currently can be a gratis (free as in beer) thing in the same way that they just hand out web servers.

    • maol@awful.systems
      1 year ago

      Bingo. “Learn AI” is an even more patronizing and repellent version of “learn to code”, which was already not much of a solution to changes in the jobs market.

      • Steve@awful.systems
        1 year ago

good point. “learn to code” is such an optimistically presented message of pessimism. It’s like those youtube remixes people would do of comedy movie trailers as horror movies. “learn to code”, like “software is eating the world”, works so much better as a claustrophobic, oppressive assertion.

        • maol@awful.systems
          1 year ago

          The blasé spite with which some people would say “just learn to code” was a precursor to the glee with which these arrogant bozos are predicting that commercial AI generators will ruin the careers of artists, journalists, filmmakers, authors, who they seem to hate.

          • self@awful.systems
            1 year ago

            and as we’ve seen in this thread, they don’t mind if it ruins the career of every junior dev who’s not onboard either. these bloodthirsty assholes want everyone they consider beneath them to not have gainful employment

            • maol@awful.systems
              1 year ago

their apparently sincere belief that not being in poverty is a privilege that people should have to earn, by doing the right kind of job, working the right kind of way, and having the right kind of politics, is genuinely very strange and dark. The worst of vicious “stay poor” culture.

              • self@awful.systems
                1 year ago

                in spite of what they claim, most tech folk are extremely conservative. that’s why it’s so easy for some of them to drop the pretense of being an ally when it becomes inconvenient, or when there’s profit in adopting monstrous beliefs (and there often is)

                • maol@awful.systems
                  1 year ago

                  The politics of silicon valley is a fascinating and broad topic in & of itself that could make a good thread here or in sneerclub

    • Aceticon@lemmy.world
      1 year ago

      *chugga* *chugga* *chugga**choo* *chooooooo…*

      There goes another hype train…

    • Dkarma@lemmy.world
      1 year ago

      I think you’re missing the forest for the trees here. Learning about AI is great advice. Being able to convey that you can understand and speak to a complex topic like AI shows intelligence.
I get what you’re saying wrt blockchain, but the applications are night and day in terms of usability and value to the common company or consumer.

      Every aspect of business will be affected by ai. That’s a fact. Blockchain not so much.

      • self@awful.systems
        1 year ago

        you’re on an instance for folks who’ve already learned about AI and, through intensive research, have found it to be goofy as fuck grift tech designed and marketed by assholes

        • Dkarma@lemmy.world
          1 year ago

          I work with AI so it’s not a grift. The asshole part is right tho.

          • self@awful.systems
            1 year ago

            why would you working in a field make it not a grift? all of the reformed cryptocurrency devs I know maintain that they didn’t know it was a grift until it was far too late (even as we told them it was in no uncertain terms). both industries seem to have the same hostility towards skeptics and constant kayfabe, and the assholes at the top are very experienced at creating systems that punish dissent.

            of course I’m wasting my time explaining this — your continued paycheck and health insurance rely on you rejecting the idea that your career field produces fraudulently marketed software and garbage research. the only way that ends is if you see something bad enough you can’t reason past it, or if the money starts to show signs of running out. it’s almost certainly gonna be the latter — the fucking genius part about targeting programmers for this kind of affinity fraud is most of them have flexible enough ethics that they’ll gladly pump out shitheaded broken software that’s guaranteed to fuck up the earth and/or get folks killed if there’s quick profit in it

      • Steve@awful.systems
        1 year ago

        Every aspect of business will be affected by ai. That’s a fact.

        never say “that’s a fact” about a product prediction.

        the relevance of usability/ux of a thing is in inverse proportion to the value the thing creates. If it created value, usability/ux would only exist as a topic for marketing one product against another.

        any industry that emphasises usability/ux as a feature is on a spectrum somewhere between problemless solutions and flooded markets.

        also, re: “I work with AI so it’s not a grift.”

        if your employer has a mission statement that is anything other than “make as much money as possible” then they are more likely to be a grift than a company whose mission statement is “make as much money as possible”

  • Sailor Sega Saturn@awful.systems
    1 year ago

    You now have the opportunity to become a Prompt Engineer

    No way man I heard the AIs were coming for those jobs. Instead I’m gonna become a prompt writing prompt writer who writes prompts to gently encourage AIs to themselves write prompts to put J.M. out of a job. Checkmate.

  • cobwoms@lemmy.blahaj.zone
    1 year ago

    i’d be fine with losing my job. i hate working, let a computer do it.

    only problem is my salary, which i cannot live without

  • MajorHavoc@lemmy.world
    1 year ago

“Experts were quick to clarify that this only applies to the very few people who still have jobs - namely those who followed experts’ previous warnings and learned programming, started a social media account, adapted to the new virtual reality corporate world, and invested in crypto before the dollar crashed.”

    Edit: And invested in a smart home and created a personal website.

  • Steve@awful.systems
    1 year ago

    The great* Jakob Nielsen is all in on AI too btw. https://www.uxtigers.com/post/ux-angst

    I expect the AI-driven UX boom to happen in 2025, but I could be wrong on the specific year, as per Saffo’s Law. If AI-UX does happen in 2025, we’ll suffer a stifling lack of UX professionals with two years of experience designing and researching AI-driven user interfaces. (The only way to have two years of experience in 2025 is to start in 2023, but there is almost no professional user research and UX design done with current AI systems.) Two years is the bare minimum to develop an understanding of the new design patterns and user behaviors that we see in the few publicly available usability studies that have been done. (A few more have probably been done at places like Microsoft and Google, but they aren’t talking, preferring to keep the competitive edge to themselves.)

    *sarcasm

  • swlabr@awful.systems
    1 year ago

    Ugh, fuck this punditry. Luckily, many of the views in this article are quickly dispatched through media literacy. I hate that, for the foreseeable future, AI will be the boogeyman whispered about in all media circles. But knowing that it is a boogeyman makes it very easy to tell when it’s general sensationalist hype/drivel for selling papers vs. legitimate concerns about threats to human livelihoods. In this case, it’s more the former.

    • swlabr@awful.systems
      1 year ago

      Isn’t it great how they aren’t saying how to “learn” or “accept” AI? They aren’t saying: “learn what a neural network is” or anything close to that. It’s not even: “Understand what AI does and its output and what that could be good or bad for”. They’re just saying, “Learn how to write AI prompts. No, I don’t care if it’s not relevant or useful, and it’s your fault if you can’t leverage that into job security.” They’re also saying: “be prepared to uproot your entire career in case your CEO tries to replace you, and be prepared to change careers completely. When the AI companies we run replace you, it’s not our fault because we warned you.” It’s so fucking sad that these people are allowed to have opinions. Also this:

      For people like Deschatelets, it doesn’t feel that straightforward.

      “There’s nothing to adapt to. To me, writing in three to four prompts to make an image is nothing. There’s nothing to learn. It’s too easy,” he said.

      His argument is that the current technology can’t help him; he only sees it being used to replace him. He finds AI programs that generate images from prompts useful when looking for inspiration, for example, but aside from that it’s not much use to him.

      “It’s almost treating art as if it’s a problem. The only problem that we’re having is because of greedy CEOs [of Hollywood studios or publishing houses] who make millions and millions of dollars, but they want to make more money, so they’ll cut the artists completely. That’s the problem,” he said.

      A king. This should be the whole article.

  • SubArcticTundra@lemmy.ml · +8/-1 · 1 year ago (edited)

    Isn’t this just the latest fad? Wasn’t it the same 10 years ago except that instead of AI it was getting social media, or having a website, or smart homes?

    • self@awful.systems · +8 · 1 year ago

      nah, sometimes smartphones and having a website (this one, in fact) are useful

      social media can fuck right off though

    • Lauchs@lemmy.world · +2/-5 · 1 year ago

      For the most part, no.

      Smartphones couldn’t do anyone’s job for them. Some people made a lot of money working in smartphone tech (apps etc.), but this is a fundamentally different paradigm.

      That being said,

      having a website

      How many successful businesses don’t have a website nowadays?

      To use my work as an example, I work in a standard IT unit for a large organization. Right now, people send our team all sorts of requests, and the easier ones get handled by new coders. However, AI will likely be able to do many of those same tasks faster and much cheaper than those junior devs. Someone (I’m hoping me) will get a raise and will presumably implement, train, and run that AI.

      Junior coders who don’t know how to implement it are about to get screwed. And on the other end of the spectrum, senior coders who made a living by being good at very niche knowledge are about to have their exclusive knowledge exploded by AI.

      I’m not actually sure learning AI will help much but what else can we do?

      • David Gerard@awful.systemsM · +13 · 1 year ago

        senior coders who made a living by being good at very niche knowledge are about to have their exclusive knowledge exploded by AI.

        That sounds like precisely the opposite of what will happen, because LLMs are not competent at important detail.

        • Zed Lopez@wandering.shop · +5 · 1 year ago

          @dgerard I do have some anxiety here, though: I know plenty of managers who’d look at the possibility and decide that they’re geniuses who have figured out a bold, brilliant plan to cut costs and have a great next quarter. Never mind every person with a technical clue saying it’s an irresponsibly bad idea – those naysayers are just focused on problems, not solutions.

          It’ll take enormous losses, outages, and data leaks to have a chance of getting through to them…

          • gerikson@awful.systems · +5 · 1 year ago

            That’s just creative destruction. Plenty of companies in the past have taken big bets on fads and failed, and yet, capitalism has not collapsed and keeps on exploiting workers and the planet.

        • Aceticon@lemmy.world · +4 · 1 year ago (edited)

          Well, a senior coder is somebody with maybe 5 years experience, tops.

          The only way I can see what is currently called AI even touching things like systems design, requirements analysis, technical analysis, technical architecture design, or the creation and adaptation of software development processes is by transforming the clear lists of points those processes produce into the kind of fluff-heavy, thick documents that managerial types find familiar and measure (by thickness) as work.

        • Lauchs@lemmy.world · +1/-3 · 1 year ago

          I mean that it is incredibly easy to ask an LLM how to do something in a language with which you are unfamiliar. So if you’ve made a living by being the guy who knows whatever semi obscure language, things are about to change.

          • gerikson@awful.systems · +9 · 1 year ago

            How does an LLM “know” a language? By ingesting a huge amount of text and source code around the language. A semi-obscure language, by definition, does not have a huge amount of text and source code associated with it.

            Similarly, people who speculate that their processes can be replaced by an LLM pre-suppose that those processes are clearly and unambiguously documented. The fact that there are humans still in the loop means they are not. So you can either make the huge effort of documenting them, then try to train an LLM, or you can just use a boring old language to automate them directly.

          • zogwarg@awful.systems · +9 · 1 year ago (edited)

            That’s the dangerous part:

            • The LLM being just about convincing enough
            • The language being unfamiliar

            You have no way of judging how correct or how wrong the output is, and no one to hold responsible or be a guarantor.

            With the recent release of HeyGen’s drag-and-drop video translation and lip-syncing tool, I saw enough people say: “Look, isn’t it amazing, I can speak Italian now.”

            No, something makes it look like you can, and you have no way of judging how convincing the illusion is. Even if the output would pass with a native speaker, you still can’t check for yourself that the translation is correct. And again, there is no one to hold accountable.

            • Lauchs@lemmy.world · +1/-4 · 1 year ago

              I am talking about coding languages. There are many ways to verify that your solutions are correct.
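
              As a toy illustration of that kind of verification, a couple of unit-test style checks in Python (the `slugify` helper is hypothetical, not anything from this thread):

```python
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Quick checks you might run on LLM-suggested code before trusting it.
assert slugify("Hello World") == "hello-world"
assert slugify("  spaced   out  ") == "spaced-out"
```

              Checks like these catch obvious breakage; they are not a proof of correctness.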

              • froztbyte@awful.systems · +11 · 1 year ago (edited)

                We are over half a century into programming computers, and the industry still fights itself over basic testing practices and over integrating them into the development process.

                The very nature of software correctness is a fuzzy problem (because defining the problem from requirements to code also often goes awry with imprecise specification).

                Just because some tooling and options exist doesn’t mean the problem is solved.

                And then people like you have/argue the magical thinking belief that slapping LLMs on top of all this shit will tooooooootally work

                I look forward to charging you money to help you fix your mess later.

                • Steve@awful.systems · +4 · 1 year ago (edited)

                  Genuine Q: Do you think we’ll start to see LLM-friendly languages emerge? Languages that consider the “LLM experience” that fools like this will welcome? Or even a reversion to low-level languages?

              • self@awful.systems · +7 · 1 year ago

                not if you don’t know the language, and not in any generalized way thanks to the halting problem
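
                A minimal sketch of the halting-problem point: this hypothetical function passes every check below, yet it never returns for one untested input, and no general procedure can detect that from the outside.

```python
def absolute(x: int) -> int:
    """Intended to return |x|, but hides a divergent case."""
    if x == -42:
        while True:  # never terminates for this one input
            pass
    return x if x >= 0 else -x

# These checks all pass and say nothing about absolute(-42).
assert absolute(5) == 5
assert absolute(-3) == 3
assert absolute(0) == 0
```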

          • self@awful.systems · +6 · 1 year ago

            LLMs are godawful at obscure languages. not sure how many devs working on non-legacy projects are “the guy who knows whatever semi obscure language” though given how focused the industry is on choosing tech stacks based on dev availability. so I guess your threat is directed towards the legacy projects I’m not doing, or the open source shit I’m doing on my own time in the obscure languages I prefer? cause if there’s one thing I need in my off time it’s a torrent of garbage, unreviewable PRs

              • Lauchs@lemmy.world · +1/-2 · 1 year ago

                That’s well put!

                I keep thinking/worrying in terms of how I use chatgpt vs what people think chatgpt can accomplish on its own.

                To me, it feels like I’ve been given a supercharger: I can handle way more than before by easily double-checking syntax and finding better functions. But if people are relying on chatgpt to code chunks for them, god help them.

      • zogwarg@awful.systems · +11 · 1 year ago

        I wouldn’t be so confident in replacing junior devs with “AI”:

        1. Even if it did work without wasting time, it’s unsustainable: senior devs aren’t born from the void, they grow out of junior devs who need somewhere to acquire these skills, and the current seniors will eventually retire.
        2. A junior dev willing to engage their brain would still iterate through to a correct implementation cheaper (and potentially faster) than a senior dev who has to spend time reviewing bullshit implementations and making arcane attempts at unreliable “AI” prompting.

        It’s copy-pasting from stack-overflow all over again. The main consequence I see for LLM based coding assistants, is a new source of potential flaws to watch out for when doing code reviews.

        • Aceticon@lemmy.world · +7 · 1 year ago (edited)

          It’s worse than “copy-pasting from stack-overflow”, because the LLM loses all the context that signals an answer’s trustworthiness (i.e. upvote and downvote counts and ratios, other people’s comments).

          That thing is trying to find the tokens of an answer nearest to the tokens of your prompt question in its n-dimensional token-distribution space (I know it sounds weird, but that’s roughly how these networks work). Maybe you’re lucky and the highest-probability combination of tokens sits right there “near” your prompt (in which case straight googling would probably have worked too); maybe you’re unlucky and it picks up probabilistically close chains of tokens that are not logically related; and maybe you’re really unlucky, your prompt falls in a sparsely populated zone of that space, and you get back something assembled from a barely related nearby cluster.
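
          As a toy sketch of that “nearest in the embedding space” idea, with made-up 3-dimensional vectors (real models use thousands of dimensions and far more machinery than a single similarity score):

```python
import math

def cosine(a, b):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

prompt   = [0.9, 0.1, 0.3]   # hypothetical embedding of your question
answer_a = [0.8, 0.2, 0.3]   # "near" the prompt: likely to be picked
answer_b = [0.1, 0.9, 0.0]   # "far" from the prompt: unlikely to be picked

assert cosine(prompt, answer_a) > cosine(prompt, answer_b)
```

          Nearness here is purely statistical; nothing in the comparison checks whether the nearby text is actually correct.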

          But that’s not even the biggest problem.

          The biggest problem is that there is no real error margin in the output: the thing will produce equally genuine, professional-looking text whether it found a highly correlated chain of tokens or an association of tokens that has little relation to your prompt question.

        • Soyweiser@awful.systems · +7 · 1 year ago

          Isn’t the lack of junior positions already a problem in a few parts of the tech industry? Due to the pressures of capitalism (drink!) I’m not sure it will be as easy as this.

          • zogwarg@awful.systems · +5 · 1 year ago (edited)

            I said I wouldn’t be confident about it, not that enshittification would not occur ^^.

            I oscillate between optimism and pessimism frequently, and for sure many companies will make bad doo doo decisions. Ultimately, trying to learn the grift is not the answer for me though; I’d rather work for a company with at least some practical sense and some pretense of an attempt at sustainability.

            The mood comes, please forgive the following, indulgent, poem:
            Worse before better
            Yet comes the AI winter
            Ousting the fever

          • Aceticon@lemmy.world · +2 · 1 year ago (edited)

            The outsourcing trend wasn’t good for junior devs in the West, mainly in English-speaking countries (it was great for junior devs in India, though).

        • wagesj45@kbin.social · +2/-5 · 1 year ago

          who don’t know how to implement it

          He didn’t say anything about replacing them. The AI will certainly be able to do some of the tedious work that gets farmed out to junior devs, especially under the supervision of a developer. Junior devs who refuse to learn how to use and implement the AI probably will get left behind.

          AI won’t replace anyone for a long time (probably). What it will do is bring about a new paradigm in how we work, and people who don’t get on board will be left behind, like all the boomers who refuse to learn how to open PDF files, except it’ll happen much quicker than the analogue-to-digital transition did, and the people affected will be younger.

      • gerikson@awful.systems · +9 · 1 year ago

        However, AI will likely be able to do many of those same tasks faster and much cheaper than those junior devs.

        I work in support too, and predict a long and profitable career cleaning up the messes the AI will create.

        • sinedpick@awful.systems · +11 · 1 year ago

          Nah bro, when GPT-5 comes out all code it’ll write will exactly match the specification, and it’ll also sim the entire universe to guess your mental state and correct any mistakes you made in your specs.

          • froztbyte@awful.systems · +8 · 1 year ago

            The singularity happens. We invent the basilisk. But, oops, the alignment we ended up with is the frustrations of hundreds of thousands of derailed projects, and poor ‘ole basi just gets to write corpware forever

            Conway’s law strikes again!