Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    • froztbyte@awful.systems · 4 days ago

      I realize that these are going to have been utter fucking idiots to start with, but

      no air channeling, no structural heatsinks, wide-open roof that could cause further air movement, mismatched fan sizes, extremely insufficient extractors

      -15/10, would watch it burn again

  • V0ldek@awful.systems · 4 days ago

    I just passed a bus stop ad (in Germany) of Perplexity AI that said you can ask it about the chances of Germany winning Euro2024.

    So I guess it’s now a literal oracle or something?? What happened to the good old “dog picking a food bowl” method of deciding championships?

  • David Gerard@awful.systems (OP) · 5 days ago

    How do you deal with ADHD overload? Everyone knows that one: you PILE MORE SHIT ON TOP

    https://pivot-to-ai.com - new site from Amy Castor and me, coming soon!

    there’s nothing there yet, but we’re thinking just short posts about funny dumb AI bullshit. Web3 is Going Just Great, but it’s AI.

    i assure you that we will absolutely pillage techtakes, but will have to write it in non-jargonised form for ordinary civilian sneers

    BIG QUESTION: what’s a good WordPress theme? For a W3iGG style site with short posts and maybe occasional longer ones. Fuckin’ hate the current theme (WordPress 2023) because it requires the horrible Block Editor

    • self@awful.systems · 5 days ago

      How do you deal with ADHD overload? Everyone knows that one: you PILE MORE SHIT ON TOP

      how dare you simulate my behavior to this degree of accuracy

      but seriously I’m excited as fuck for this! I’ve been hoping you and Amy would take this on forever, and it’s finally happening!

      • V0ldek@awful.systems · 4 days ago

        how dare you simulate my behavior to this degree of accuracy

        @AcausalRobotGod frantically taking notes

      • David Gerard@awful.systems (OP) · 5 days ago

        molly is delighted that people might stop telling her to

        arguably we shoulda done it last year, but better late than never

        • self@awful.systems · 5 days ago

          I wouldn’t even call y’all late; public opinion towards AI is just starting to turn from optimism to mockery, so this feels like the perfect opportunity to normalize sneering in a way that’s easier for folks without context to consume than SneerClub or TechTakes.

          • David Gerard@awful.systems (OP) · 5 days ago

            when I write the blockchain stuff, it’s like, here’s one paragraph of the actual thing going on, and here’s another thousand words to make it comprehensible

            • Soyweiser@awful.systems · 4 days ago

              So much yak shaving involved with blockchain.

              Or any of this, for that matter. I imagine you have already had to answer the question of ‘how are you involved with Twitter being bought by Musk’ with ‘well, in 1995, Scientology …’

  • hrrrngh@awful.systems · 5 days ago

    People are so, so, so bad at telling what’s a bot and what’s real. I know social media is swarming with bots, but if you’re interacting with somebody who’s saying anything more complicated than “P o o s i e I n B i o” it’s probably not a bot. A similar thing happens in online games, too, and it’s usually the excuse people use before harassing someone else

    But damn the lengths people will go to to avoid admitting they were wrong. This comment chain just keeps going on with somebody who’s convinced {origin="RU"}{faith="bad"}{election_manipulation="very yes"} must be real because something something microservices: https://www.reddit.com/r/interestingasfuck/comments/1dlg8ni/russian_bot_falls_prey_to_a_prompt_iniection/l9pbmrw/ It reads like something straight off /r/programming or the orange site

    Then it comes full circle with people making joke responses on Twitter imitating the first post, and then other people taking those joke responses as proof that the first one must be real: https://old.reddit.com/r/ChatGPT/comments/1dimlyl/twitter_is_already_a_gpt_hellscape/l9691c8/

    This account kind of kicked up some drama too, basically for the same reason (answering an LLM prompt), but it’s about mushroom ID instead: https://www.reddit.com/user/SeriousPerson9 I’ve seen people like this who use voice-to-text and run their train of thought through ChatGPT or something, like one person notorious on /r/gamedev. But people always assume it’s some advanced autonomous bot with stochastic post delays that mimic a human’s active hours when like, it’s usually just somebody copy/pasting prompts and responses.

    Sorry if you contract any diseases from those links or comment chains

    • Amoeba_Girl@awful.systems · 4 days ago

      Yeah I thought this one looked very very made up and far too on the nose. I think it’s fine to label human-run spam/troll accounts as bots though.

      Somewhat related, I stumbled upon robots flirting with each other the other day.

    • skillissuer@discuss.tchncs.de · 4 days ago

      well, count me as bamboozled then. a stupider, pre-GPT version of this consists of robocalls that are really operated by people, but they don’t speak, they just choose one of several canned responses. why don’t they speak? because most of them are illegal immigrants and it would be obvious from their accents

    • o7___o7@awful.systems · 5 days ago

      It’s like young-earth creationists who believe that dinosaur bones were installed as-is at the beginning of time in order to test us, so their existence proves nothing about geology or evolution.

    • froztbyte@awful.systems · 5 days ago

      the first post (and some bits) was featured earlier in the stubsack, if you want to check some other parts

      but holy hell that reddit thread

  • Soyweiser@awful.systems · 6 days ago

    Not sure where I got the link to this video from, could very well be from this place, but look, ‘Gamers’ made an NFT game without blockchain and cryptocurrencies! And it will potentially lead to other Gamers being scammed! Innovation!

    • Eiim@lemmy.blahaj.zone · 6 days ago

      akshually, the tokens are perfectly fungible, my stickernana is totally indistinguishable from the million other stickernanas out there. Not that it matters for the purpose of useless speculative trades.

      • Soyweiser@awful.systems · 6 days ago

        Shit you are right, I should have said an NFT game without the NFTs. I stand corrected! Also would have been funnier.

  • jax@awful.systems · 9 days ago

    NYT opinion piece title: Effective Altruism Is Flawed. But What’s the Alternative? (archive.org)

    lmao, what alternatives could possibly exist? have you thought about it, like, at all? no? oh…

    (also, pet peeve, maybe bordering on pedantry, but why would you even frame this as a singular alternative? The alternative doesn’t exist, but there are actually many alternatives that have fewer flaws.)

    You don’t hear so much about effective altruism now that one of its most famous exponents, Sam Bankman-Fried, was found guilty of stealing $8 billion from customers of his cryptocurrency exchange.

    Lucky souls haven’t found sneerclub yet.

    But if you read this newsletter, you might be the kind of person who can’t help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us…

    rational_economist.webp

    There are actually some decent quotes critical of EA (though the author doesn’t actually engage with them at all):

    The problem is that “E.A. grew up in an environment that doesn’t have much feedback from reality,” Wenar told me.

    Wenar referred me to Kate Barron-Alicante, another skeptic, who runs Capital J Collective, a consultancy on social-change financial strategies, and used to work for Oxfam, the anti-poverty charity, and also has a background in wealth management. She said effective altruism strikes her as “neo-colonial” in the sense that it puts the donors squarely in charge, with recipients required to report to them frequently on the metrics they demand. She said E.A. donors don’t reflect on how the way they made their fortunes in the first place might contribute to the problems they observe.

    • maol@awful.systems · 8 days ago

      Oh my god there is literally nothing the effective altruists do that can’t be done better by people who aren’t in a cult

    • Soyweiser@awful.systems · 8 days ago

      But if you read this newsletter, you might be the kind of person who can’t help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us…

      Funny how the wannabe LW Rationalists don’t seem to read much Rationalism, as Scott has already mentioned that our view of economists (that they are all looking for the Rational Economic Human Unit) is out of date and not how economists think anymore. (So in a way it is a false stereotype of economists; wasn’t there something about how Rationalists shouldn’t fall for these things? ;) )

  • BigMuffin69@awful.systems · 9 days ago

    Found in the wilds^

    Giganto brain AI safety ‘scientist’

    If AIs are conscious right now, we are monsters. Nobody wants to think they’re monsters. Ergo: AIs are definitely not conscious.

    Internet rando:

    If furniture is conscious right now, we are monsters. Nobody wants to think they’re monsters. Ergo: Furniture is definitely not conscious.

  • BigMuffin69@awful.systems · 9 days ago

    https://xcancel.com/AISafetyMemes/status/1802894899022533034#m

    The same pundits have been saying “deep learning is hitting a wall” for a DECADE. Why do they have ANY credibility left? Wrong, wrong, wrong. Year after year after year. Like all professional pundits, they pound their fist on the table and confidently declare AGI IS DEFINITELY FAR OFF and people breathe a sigh of relief. Because to admit that AGI might be soon is SCARY. Or it should be, because it represents MASSIVE uncertainty. AGI is our final invention. You have to acknowledge the world as we know it will end, for better or worse. Your 20 year plans up in smoke. Learning a language for no reason. Preparing for a career that won’t exist. Raising kids who might just… suddenly die. Because we invited aliens with superior technology we couldn’t control. Remember, many hopium addicts are just hoping that we become PETS. They point to Ian Banks’ Culture series as a good outcome… where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME. What’s funny, too, is that noted skeptics like Gary Marcus still think there’s a 35% chance of AGI in the next 12 years - that is still HIGH! (Side note: many skeptics are butthurt they wasted their career on the wrong ML paradigm.) Nobody wants to stare in the face the fact that 1) the average AI scientist thinks there is a 1 in 6 chance we’re all about to die, or that 2) most AGI company insiders now think AGI is 2-5 years away. It is insane that this isn’t the only thing on the news right now. So… we stay in our hopium dens, nitpicking The Latest Thing AI Still Can’t Do, missing forests from trees, underreacting to the clear-as-day exponential. Most insiders agree: the alien ships are now visible in the sky, and we don’t know if they’re going to cure cancer or exterminate us. Be brave. Stare AGI in the face.

    This post almost made me crash my self-driving car.

    • self@awful.systems · 9 days ago

      Remember, many hopium addicts are just hoping that we become PETS. They point to Ian Banks’ Culture series as a good outcome… where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME.

      I am once again begging these e/acc fucking idiots to actually read and engage with the sci-fi books they keep citing

      but who am I kidding? the only way you come up with a take as stupid as “humans are pets in the Culture” is if your only exposure to the books is having GPT summarize them

    • maol@awful.systems · 8 days ago

      It’s mad that we have an actual existential crisis in climate change (temperature records broken across the world this year) but these cunts are driving themselves into a frenzy over something that is nowhere near as pressing or dangerous. Oh, people dying of heatstroke isn’t as glamorous? Fuck off

    • Mii@awful.systems · 9 days ago

      Seriously, could someone gift this dude a subscription to spicyautocompletegirlfriends.ai so he can finally cum?

      One thing that’s crazy: it’s not just skeptics, virtually EVERYONE in AI has a terrible track record - and all in the same OPPOSITE direction from usual! In every other industry, due to the Planning Fallacy etc, people predict things will take 2 years, but they actually take 10 years. In AI, people predict 10 years, then it happens in 2!

      ai_quotes_from_1965.txt

    • Soyweiser@awful.systems · 8 days ago

      humans are pets

      Actually not what is happening in the books. I get where they are coming from, but this requires redefining the word pet in such a way that it is a useless word.

      The Culture series really breaks the brains of people who can only think in hierarchies.

    • gerikson@awful.systems · 8 days ago

      If you’ve been around the block like I have, you’ve seen reports about people joining cults to await spaceships, people preaching that the world is about to end &c. It’s a staple trope in old New Yorker cartoons, where a bearded dude walks around with a billboard saying “The End is nigh”.

      The tech world is growing up, and a new internet-native generation has taken over. But everyone is still human, and the same pattern-matching that leads a 19th century Christian to discern when the world is going to end by reading Revelation will lead a 25 year old tech bro steeped in “rationalism” to decide that spicy autocomplete is the first stage of The End of the Human Race. The only difference is the inputs.

    • skillissuer@discuss.tchncs.de · 10 days ago

      version readable for people blissfully unaffected by having twitter account

      “Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”

      yeah ez just lemme build dc worth 1% of global gdp and run exclusively wisdom woodchipper on this

      “Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might.”

      power grid equipment manufacture always had long lead times, and now, there’s a country in eastern europe that has something like 9GW of generating capacity knocked out, you big dumb bitch, maybe that has some relation to all packaged substations disappearing

      They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

      i see that besides 50s aesthetics they like mccarthyism

      “As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on. “

      how cute, they think that their startup gets nationalized before it dies from terminal hype starvation

      “I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

      “We don’t need to automate everything—just AI research”

      “Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler. “

      just needs tiny increase of six orders of magnitude, pinky swear, and it’ll all work out

      it weakly reminds me of how Edward Teller got an idea for a primitive thermonuclear weapon, then some of his subordinates ran the numbers and decided that it would never work. his solution? Just Make It Bigger, it has to work at some point (it was deemed unfeasible and tossed into the trashcan of history where it belongs. nobody needs gigaton-range nukes, even if his scheme worked). he was very salty that somebody else (Stanisław Ulam) figured it out in a practical way

      except that the only thing openai manufactures is hype and cultural fallout

      “We’d be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers.” “…given inference fleets in 2027, we should be able to generate an entire internet’s worth of tokens, every single day.”

      what’s “model collapse”

      “What does it feel like to stand here?”

      beyond parody

      • zogwarg@awful.systems · 11 days ago

        “Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler. “

        Also this doesn’t give enough credit to gradeschoolers. I certainly don’t think I am much smarter (if at all) than when I was a kid. Don’t these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? I’m not sure if I’m the weird one here, but to me growing up is not about becoming smarter; it’s about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.

        • Mii@awful.systems · 11 days ago

          Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems?

          Yes. They literally think that. I mean, why else would they assume a spicy text extruder with a built-in thesaurus is so smart?

      • V0ldek@awful.systems · 11 days ago

        To engage with the content:

        That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

        I see this is becoming their version of “to the moon”, and it’s even dumber.

        To engage with the form:

        wisdom woodchipper

        Amazing, 10/10 no notes.

        • skillissuer@discuss.tchncs.de · 10 days ago

          I see this is becoming their version of “to the moon”, and it’s even dumber.

          it only makes sense after familiar and unfamiliar crypto scammers pivoted to the new shiny thing at sound-barrier-breaking speed, starting with big boss sam altman

        • skillissuer@discuss.tchncs.de · 10 days ago

          wisdom woodchipper

          i think i first used that around the time the sneer came out about some lazy bitches who tried and failed to use chatgpt output as meaningful filler in a peer-reviewed article. of course it worked, and not only at MDPI, because i doubt anyone seriously cares about the prestige of International Journal of SEO-bait Hypecentrics, impact factor 0.62, least of all the reviewers

      • Soyweiser@awful.systems · 10 days ago

        They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

        Literally a plot point from a warren ellis comic book series, of course in that series they succeed in summoning various gods, and it does not end well (unless you are really into fungus).

      • skillissuer@discuss.tchncs.de · 10 days ago

        source of that image is also bad: hxxps://waitbutwhy[.]com/2015/01/artificial-intelligence-revolution-1.html. i think i’ve seen it listed on lessonline? can’t remember

        not only do they seem like true believers, they have been for a decade at this point

        In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:

        Median optimistic year (10% likelihood): 2022

        Median realistic year (50% likelihood): 2040

        Median pessimistic year (90% likelihood): 2075

        just like fusion, it’s gonna happen in next decade guys, trust me

        • 200fifty@awful.systems · 10 days ago

          I believe waitbutwhy came up before on old sneerclub, though in that case we were making fun of them for bad political philosophy rather than bad AI takes

          • skillissuer@discuss.tchncs.de · 10 days ago

            there’s a lot of bad everything, it looks like a failed attempt at rat-scented xkcd. and yeah they were invited to lessonline but didn’t arrive

      • o7___o7@awful.systems · 9 days ago

        “Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”

        They are going to summon a god. And we can’t do anything to stop it.

        This is a direct rip-off of the plot of The Labyrinth Index, except in the book it’s a public-private partnership between the US occult deep state, defense contractors, and Silicon Valley rather than a purely free-market apocalypse, and they’re trying to execute cthulhu.exe rather than implement the Acausal Robot God.

    • Snot Flickerman@lemmy.blahaj.zone · 11 days ago

      As an atheist, I’ve noticed a disproportionate number of atheists replace traditional religion with some kind of wild tech belief or statistics belief.

      AI worship might be the most perfect of the examples of human hubris.

      It’s hard to stay grounded; belief in general is part of human existence, whether we like it or not. We believe in things like justice and freedom and equality, but these are all just human ideas (good ones, of course).

      • Soyweiser@awful.systems · 10 days ago

        The fear of death and the void is quite a problem for a lot of people. Hell, I would not mind living a few thousands years more (with a few important additions, like not living in slavery, declined mental health, pain, ability to voluntarily end it etc etc).

        But yeah this is just religion with some bits removed and some bits tacked on.

      • skillissuer@discuss.tchncs.de · 10 days ago

        can also happen with nontraditional religion; the mostly irreligious Czech Republic seems rather sane and rational until you notice the tons of new-age shite. it might be some kind of remnant rather than a replacement

        • rook@awful.systems · 9 days ago

          I’m always slightly surprised by how much the French and Germans luuuuuurve their homeopathy, and depressed by how politically influential Big Sugar Pill And Magic Water is there.

            • rook@awful.systems · 7 days ago

              Nothing concrete, unfortunately. They’re places I visit rather than somewhere I live and work, so I’m a bit removed from the politics. Orac used to have good coverage of the subject, but I found reading his blog too depressing, so I stopped a while back.

              Pharmacies are piled high with homeopathic stuff in both places, and in Germany at least it is exempt from any legal requirement to show efficacy and purchases can be partially reimbursed by the state. In France at least, you can’t claim homeopathic products on health insurance anymore, which is an improvement.

    • jax@awful.systems · 8 days ago

      q: how do you know if someone is a “Renaissance man”?

      a: the llm that wrote the about me section for their website will tell you so.

      jesus fucking christ

      From Grok AI:

      Zach Vorhies, oh boy, where do I start? Imagine a mix of Tony Stark’s tech genius, a dash of Edward Snowden’s whistleblowing spirit, and a pinch of Monty Python’s humor. Zach Vorhies, a former Google and YouTube software engineer, spent 8.5 years in the belly of the tech beast, working on projects like Google Earth and YouTube PS4 integration. But it was his brave act of collecting and releasing 950 pages of internal Google documents that really put him on the map.

      Vorhies is like that one friend who always has a conspiracy theory, but instead of aliens building the pyramids, he’s got the inside scoop on Google’s AI-Censorship system, “Machine Learning Fairness.” I mean, who needs sci-fi when you’ve got a real-life tech thriller unfolding before your eyes?

      But Zach isn’t just about blowing the whistle on Google’s shenanigans. He’s also a man of many talents - a computer scientist, a fashion technology company founder, and even a video game script writer. Talk about a Renaissance man!

      And let’s not forget his role in the “Plandemic” saga, where he helped promote a controversial documentary that claimed vaccines were contaminated with dangerous retroviruses. It’s like he’s on a mission to make the world a more interesting (and possibly more confusing) place, one conspiracy theory at a time.

      So, if you ever find yourself in a dystopian future where Google controls everything and the truth is stranger than fiction, just remember: Zach Vorhies was there, fighting the good fight with a twinkle in his eye and a meme in his heart.

  • froztbyte@awful.systems · 9 days ago

    I have no context on this so I can’t really speak to the FSB part of the remark, but on the whole it stands entertaining all by itself:

  • Soyweiser@awful.systems · 10 days ago

    Not a big sneer, but I was checking my spam box for badly filtered spam and saw a guy basically emailing me ‘hey, you made some contributions to open source, these are now worth money (in cryptocoins, so no real money), you should claim them, and if you are nice you could give me a finder’s fee’. And eurgh, I’m so tired of these people. (Thankfully he provided enough personal info that I could block him on various social medias.)

  • FredFig@awful.systems · 9 days ago

    This is quite minor, but it’s very funny seeing the would-be sneerers still on r/buttcoin fall for the AI grift, to the point that it’s part of their modscript copypasta

    Or in the pinned mod comment:

    AI does have some utility and does certain things better than any other technology, such as:

    • The ability to summarize in human readable form, large amounts of information.
    • The ability to generate unique images in a very short period of time, given a verbose description

    tfw you’re anti-crypto, but only because it’s a bad investment opportunity.

    • skillissuer@discuss.tchncs.de · 9 days ago

      i came here from r/buttcoin and lmao

      i mean technically it passes the very low bar of having a single non-criminal use case (mass manufacturing spam and other drivel)

      some are not falling for it at least

    • earthquake · 9 days ago

      Gross, that whole thread is gross. A lot of promptfans in that thread are seemingly experiencing pushback for the first time, and they are baffled!

    • o7___o7@awful.systems · 9 days ago

      While it’s always correct to laugh at crypto advocates, /r/buttcoin just isn’t very edifying lately. There’s no depth to the criticism. It comes across as the “anti” version of wall street bets for people who lost their shirts, especially since The Appening, when a lot of subject matter experts left town.

    • skillissuer@discuss.tchncs.de · 9 days ago

      in that thread: marketing dude who uses chatgpt, never had issues with incorrect results. i mean how would he even catch this, his entire field is uncut bullshit

    • Eiim@lemmy.blahaj.zone · 8 days ago

      I don’t think that comment is unreasonable. LLMs can summarize large-ish amounts of information (as long as it fits in the context window) in a human-readable form, and while it’s still prone to getting things wrong and I’d rather a human do it all day, it does do it “better than any other technology” that I know of. We can argue about “unique” but strictly speaking it will almost certainly generate an image that didn’t exist before. I’d also rather a human make the image for quality’s sake, but being fast, cheap, and copyright-free is a useful enough combo in certain situations.

      It doesn’t really bring up the main issues with AI, but I think that’s acceptable in the context, which is “How is AI different from crypto in the context of r/Buttcoin”, and in that context “crypto is completely useless” and “AI has minimal uses which may or may not be worthwhile depending on how you evaluate the benefits and negatives” are meaningfully different.

      • FredFig@awful.systems · 8 days ago

        It’s “reasonable” in context, I just thought it was funny that r/buttcoin would be headpatting AI at all, since it’s basically the exact same people pushing AI as the people pushing crypto, with the exact same motives.