These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup which does, uh, lots of stuff with AI (see their wild services page) and which appears to have been announced on LinkedIn two months ago. The founders section lists details beyond J.M.'s “over 7 years in the tech sector” which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets”, whose employment prospects are dimming this year (and who knows a bunch of people in this situation), and who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.

  • sparkl_motion@beehaw.org · 1 year ago

    Pretty much this. I work in support services in an industry that can’t really use AI to resolve issues due to the myriad of different deployment types and end user configurations.

    No way in hell will I be out of a job due to AI replacing me.

    • self@awful.systems · 1 year ago

      your industry isn’t alone in that — just like blockchains, LLMs and generative AI are a solution in search of a problem. and like with cryptocurrencies, there’s a ton of grifters with a lot of money riding on you not noticing that the tech isn’t actually good for anything

      • TehPers@beehaw.org · 1 year ago

        Unlike blockchains, LLMs have practical uses (GH copilot, for example, and some RAG usecases like summarizing aggregated search results). Unfortunately, everyone and their mother seems to think they can solve every problem they have, and it doesn’t help when suits in companies want to use LLMs just to market that they use them.

        Generally speaking, they are a solution in search of a problem though.

        • self@awful.systems · 1 year ago

          GH copilot, for example, and some RAG usecases like summarizing aggregated search results

          you have no idea how many engineering meetings I’ve had go off the rails entirely because my coworkers couldn’t stop pasting obviously wrong shit from copilot, ChatGPT, or Bing straight into prod (including a bunch of rounds of re-prompting once someone realized the bullshit the model suggested didn’t work)

          I also have no idea how many, thanks to alcohol

          • Steve@awful.systems · 1 year ago

            Haha they are, in fact, solutions that solve potential problems. They aren’t searching for problems but they are searching for people to believe that the problems they solve are going to happen if they don’t use AI.

          • TehPers@beehaw.org · 1 year ago

            That sounds miserable tbh. I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same). If your engineers are just pasting whatever BS comes out of the LLM into their code, maybe they need a serious talking-to about replacing them with the LLM if they can’t contribute anything meaningful beyond that.
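A minimal sketch of the kind of repetitive block being described (the field names here are made up for illustration): each line repeats the same structure with only one or two tokens changed, which is exactly the pattern-continuation case a completion tool or editor macro handles mechanically.

```python
# Hypothetical example of the "5 lines slightly different each time" pattern:
# the same shape per line, with only the key changing.
raw = {"width": "1920", "height": "1080", "depth": "32", "scale": "2", "fps": "60"}

settings = {}
settings["width"] = int(raw["width"])    # typed by hand once...
settings["height"] = int(raw["height"])  # ...then each following line is a
settings["depth"] = int(raw["depth"])    # near-copy that a completion tool
settings["scale"] = int(raw["scale"])    # (or an editor macro) can continue
settings["fps"] = int(raw["fps"])

print(settings)
```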

            • self@awful.systems · 1 year ago

              as much as I’d like to have a serious talk with about 95% of my industry right now, I usually prefer to rant about fascist billionaire assholes like altman, thiel, and musk who’ve poured a shit ton of money and resources into the marketing and falsified research that made my coworkers think pasting LLM output into prod was a good idea

              I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same).

              it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

              • 200fifty@awful.systems · 1 year ago (edited)

                it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

                I was gonna say… good old `qa....q` then `20@a` (record a macro into register a, then replay it 20 times) does the job just fine thanks :p

                • self@awful.systems · 1 year ago

                  “but my special boy text editing task surely needs more than a basic macro” that’s why Bram Moolenaar, Dan Murphy, and a bunch of grad students Stallman didn’t credit gave us Turing-complete editing languages

              • TehPers@beehaw.org · 1 year ago

                Yes, the marketing of LLMs is problematic, but it doesn’t help that they’re extremely demoable to audiences who don’t know enough about data science to realize how infeasible it is for a service to be inaccurate as often as LLMs are. Show a cool LLM demo to a C-suite and chances are they’ll want to make a product out of it, regardless of the fact you’re only getting acceptable results 50% of the time.

                it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

                I’m perfectly fine with vscode, and I know enough vim to make quick changes, save, and quit when git opens it from time to time. It also has multi-cursor support which helps when editing multiple lines in the same way, but not when there are significant differences between those lines but they follow a similar pattern. Copilot can usually predict what the line should be given enough surrounding context.

              • TehPers@beehaw.org · 1 year ago (edited)

                It’s not that uncommon when filling an array with data or populating a YAML/JSON file by hand. It can even be helpful when populating something like a Docker Compose config, which I use occasionally to spin up local services (DBs and such) while debugging.
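For illustration only, a minimal Docker Compose sketch of the "spin up local DBs while debugging" case (service names, image tags, ports, and credentials are all placeholders): the `environment` and `ports` entries are the kind of near-identical lines a macro or completion tool can fill in.

```yaml
# docker-compose.yml - hypothetical local-debugging services
services:
  db:
    image: postgres:16          # placeholder tag
    environment:
      POSTGRES_USER: dev        # throwaway local-only credentials
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"             # expose to the app being debugged
  cache:
    image: redis:7
    ports:
      - "6379:6379"
```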

                  • TehPers@beehaw.org · 1 year ago (edited)

                    Copilot helped me a lot when filling in legendaryII.json based on data from legendary.json in this directory. The data between the two files is similar, but there are slight differences in the item names and flag names. Most of it was copy/paste, but filling in the When sections was much easier for me with copilot + verify, for example.

                    Edit: It also helped me with filling in the entries at the top of this C# file, based on context I provided (temporarily) in comments above in a different format.