• j4k3@lemmy.world · 9 months ago

    People fundamentally fail to understand what AI is useful for and what it is actually doing. It is nothing like an Artificial General Intelligence. It is more like a better way to search for information and interface with it. Just use open source offline AI, not the proprietary crap. The real issue is not what the AI can create. That is no different from what a person is capable of once they are aware of the same content, whether code, art, music, etc. The fact that I am inspired by something I have seen does not give the original source a right to my thoughts or products. AI works at the same level. It is an aggregate of all content, but it contains none of the original works, any more than a person who knows an artist’s paintings and tries to paint something in a similar style.

    The real issue that people fail to talk about is that AI can synthesize an enormous amount of data about a person after prolonged engagement. It is like an open port straight into your subconscious, with plenty of levers and switches it can twist and toggle. Handing that kind of interpersonal access to a proprietary stalkerware system, where pieces of people are sold off to the highest bidder for exploitation, is totally insane. This kind of data can manipulate people in ways that will sound like science fiction until it becomes normalized. Proprietary AI is criminal in its potential to manipulate and exploit, especially in the political sphere.

    • pjhenry1216@kbin.social · 9 months ago

      It’s not the same as an artist being inspired. It’s more like an artist painting something in the style of someone else. AI can’t generate anything new and it doesn’t transform things in its own way. It just copies and melds things together. Nothing about it is really its own. It’s just a biased algorithm putting things together. Moreover, an artist could actually forget what a painting looks like and still be inspired by it. If you erase something from the LLM, its output changes. It’s basically constant copying.

      That analogy is what a bunch of people who want to sell AI art try to pitch. It’s the difference between content and art.

      • j4k3@lemmy.world · 9 months ago

        It is possible to do more of what I would call inspired. Models are not restricted to “in the style of”; unrelated abstract ideas can be mixed to create something altogether new. It takes a good model and training, but this is just from 15 minutes of messing around in Stable Diffusion, trying to make Van Gogh do his best impression of Bob Ross. I’m adding all kinds of inspirational concepts, all the way to emotions, contrasting them, and doing this in layers of refinement using a series of images. I’m not very practiced at this. I would call this an artist’s tool. Yes, it changes the paradigm, but people need to get over their resistance to change; this is evolution, adapt or die.

        I used tricks like image-to-image, and this was not my best result as far as the Van Gogh:Bob Ross blend goes, but I like it the most of the 150 images I made.

        Positive: texture, (in the style of Vincent van Gogh:Bob Ross), [nasa], swirl, spiral, foreground tree, mountain drive, kindness, love, masterclass, (abstract:1.8), painting, dark, silhouette, swirls, texture, branches, ocean waves, anger, lonely

        Negative: red, (signature), multiple moons, buildings, modern, structures, guard rail, snow, realism, yellow, orange, detailed mountains, left side line, stretchy stars, brake lights, forest

        Seed: 1053938996
        Model: Absolute Reality V1.6525
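
        For anyone curious, this is roughly what a run like that looks like in code with the Hugging Face diffusers library. It is a minimal sketch: I am assuming a generic Stable Diffusion 1.5 checkpoint as a stand-in for Absolute Reality, and the “()”/“[]” weighting syntax is an Automatic1111-style convention that the bare diffusers pipeline does not parse on its own (a helper like Compel is usually used for that).

        # Minimal text-to-image sketch: positive prompt, negative prompt, fixed seed.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",   # assumption: stand-in for Absolute Reality V1.6525
            torch_dtype=torch.float16,
        ).to("cuda")

        positive = ("texture, (in the style of Vincent van Gogh:Bob Ross), [nasa], swirl, spiral, "
                    "foreground tree, mountain drive, kindness, love, masterclass, (abstract:1.8), "
                    "painting, dark, silhouette, swirls, texture, branches, ocean waves, anger, lonely")
        negative = ("red, (signature), multiple moons, buildings, modern, structures, guard rail, snow, "
                    "realism, yellow, orange, detailed mountains, left side line, stretchy stars, "
                    "brake lights, forest")

        # Fixing the seed makes the run reproducible, like the seed quoted above.
        generator = torch.Generator("cuda").manual_seed(1053938996)

        image = pipe(
            prompt=positive,
            negative_prompt=negative,   # steers the sampler away from these concepts
            guidance_scale=7.5,
            num_inference_steps=30,
            generator=generator,
        ).images[0]
        image.save("van_gogh_bob_ross.png")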

        • pjhenry1216@kbin.social · 9 months ago

          I think you’re missing the point. You’re still generating something based purely on other things. There’s nothing of an artist in there. There’s no message. There’s no art. You created content. You aren’t in there. I know this seems odd, because there’s no way to know it without extra knowledge, but something is lost. And it’s not an artist’s tool. It’s a non-artist’s tool.

          • j4k3@lemmy.world · 9 months ago

            You are wrong because your assumptions are arbitrary. I have spent years painting cars and doing graphics and airbrush work professionally. I am a maker. I can craft with almost any medium, digital or physical. Once upon a time, anyone who did not mix their own colors and base media was considered a fake artist. This is a tool. I can create exponentially more while searching for a better composition. So can you; so can everyone else.

            Stupid people will resist this change while intelligent people learn the tech, adapt, and raise everyone’s expectations of what art really is. That is the fundamental shift happening right now. The value of time investment has changed drastically. If you can’t adapt to that change, you only hurt yourself in the end. Open source offline AI at a useful level is around 6 months old; products targeting end users are still being developed. In the next 2 years everything is going to be different. In 10 years the quality of art media will make the present look like child’s play. Feel free to plan your own obsolescence. This is the biggest game changer since the internet of the late ’90s.

            It is funny how people who have not tried it, or really looked into what it can be used for, have strong opinions about it, or put their heads in the sand when they are told. I got into it to learn computer science, so that I can upload a book as a database and ask the book plain-text questions, and so that I could do some interesting CAD techniques in Blender. The second I saw I could question a book offline, with citations, I was sold.
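
            For a rough idea of what the “question a book offline” setup means in practice, here is a minimal sketch; the embedding model, chunk size, and file name are assumptions for illustration, not my exact toolchain. The retrieved passages would then be handed to a local LLM along with the question.

            # Minimal sketch: chunk a book, embed the chunks offline, and pull the
            # passages most relevant to a plain-text question.
            import numpy as np
            from sentence_transformers import SentenceTransformer

            def chunk(text, size=1000):
                return [text[i:i + size] for i in range(0, len(text), size)]

            book = open("book.txt", encoding="utf-8").read()   # assumed local file
            chunks = chunk(book)

            model = SentenceTransformer("all-MiniLM-L6-v2")    # small model that runs offline once downloaded
            chunk_vecs = model.encode(chunks, normalize_embeddings=True)

            question = "What does chapter 3 say about memory allocation?"   # example question
            q_vec = model.encode([question], normalize_embeddings=True)[0]

            scores = chunk_vecs @ q_vec                        # cosine similarity (vectors are normalized)
            for i in np.argsort(scores)[::-1][:3]:
                print(f"[chunk {i}] score={scores[i]:.2f}\n{chunks[i][:200]}...\n")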

            • pjhenry1216@kbin.social · 9 months ago

              I’m not being arbitrary. I explicitly gave a reasonable distinction between content and art. You can create content without soul, that’s fine. I’m not saying you need to mix your own paint. I’m saying art is inherently human by definition. You can pump out all the content you want, but it will just make finding decent art that much harder. It’s like saying ChatGPT can pump out Android apps more quickly; I don’t think anyone would argue that raises the quality of the Android app market.

              You’re just thinking of everything from the point of view of middle management. Quantity over quality.

              When you remove humans from the equation, it’s not art. It’s content. It’s disposable fluff. It’s mass-produced. It’s soulless. But sure, think yourself intelligent because you literally put money over everything else. Why not just flood the market with remakes and remasters at this point? It fits your argument.

              You can’t raise an expectation of art by literally removing any meaning to it.

              • j4k3@lemmy.world · 9 months ago

                You need to learn this and try it. You don’t know what you don’t know, and you are making a lot of bad assumptions. The result is not random. The creativity is in understanding what the words do and understanding the process, just like any other art. There is a lot of nuance. Every word I chose has an impact in both sets of prompts. This is the result of taking the best image of 60 and then using it to generate a chain, slowly adjusting a whole bunch of tools to shape the output. I got to the point where each new iteration changed very little in the final image. Word order matters; the “()” brackets strengthen a term, even more so if they include a number like “:1.8”, while “[ ]” weakens a term. Words carry more weight at the beginning and at the end of the prompt. The placement of composition, technique, and metadata words matters too. There are dozens of other techniques just in the basic settings, and limitless ways to alter the output once you learn how the AI actually works. This is similar to what digital photography did to film photography. Is it going to kill old techniques? It will completely change the paradigm.

                With the best AI outputs, you can’t spot the difference unless you are told; no one can. That is the only thing that matters in the end. Art is made to be looked at, and if the viewer can’t tell the difference, that is the only difference that matters. I’m not ‘the enemy’; this isn’t a team sport, and it isn’t black and white. I’m just a regular dude actually using this to improve myself. I’ve used it enough to know what I’m doing and what I’m talking about, but I barely touch the image generation stuff. If I spent a week putting the toolchains together better, I could produce a much better image than what I posted.
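
                As a rough illustration of that kind of image-to-image refinement chain, here is a minimal sketch with an img2img pipeline; the checkpoint, strength values, and loop count are assumptions, not the settings I actually used.

                # Hypothetical img2img refinement chain: feed the best output back in with
                # decreasing strength so each pass changes less and less of the image.
                import torch
                from PIL import Image
                from diffusers import StableDiffusionImg2ImgPipeline

                pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
                    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
                ).to("cuda")

                prompt = "(in the style of Vincent van Gogh:Bob Ross), swirl, foreground tree, (abstract:1.8)"
                image = Image.open("best_of_60.png").convert("RGB")   # the chosen starting image (assumed filename)
                generator = torch.Generator("cuda").manual_seed(1053938996)

                for step, strength in enumerate([0.6, 0.45, 0.3]):    # lower strength = smaller change per pass
                    image = pipe(
                        prompt=prompt,
                        image=image,
                        strength=strength,
                        guidance_scale=7.5,
                        generator=generator,
                    ).images[0]
                    image.save(f"refine_{step}.png")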

                • pjhenry1216@kbin.social · 9 months ago

                  Every word has an impact that you can’t predict. So no. All your words and your condescending tone say more about what you don’t know. You are hitting a button and continually trying new things until the AI gives you the result you want. That is not the same. Especially since you’ll start changing things just because the output didn’t match your original intent, reaching for synonyms and the like.

                  It simply isn’t the same as human inspiration. There’s a reason courts have ruled against giving the prompt creator rights to AI-generated art. Their reasoning holds.

                  Just because someone might not be able to tell the difference between a forgery and the real thing doesn’t make them both equally art.

                  The same holds true for your example, which I literally already used and explained why it didn’t work. Are you even reading my comments, or just ranting?

                  • j4k3@lemmy.world · 9 months ago

                    You have no clue what you are talking about. I can dial in very specific results, anywhere I want and at any point, with the tools. I can mask any area and control what happens there through prompting. I only used the basic features for a few minutes with my simplest tool. I could open up ComfyUI and build a much more detailed node network. I could figure out the new Open Dream GUI, break images apart into mask layers, and generate whatever I want on each of them. Or, if I cared enough, I would do all of it myself on the command line, like I am doing with text-generative AI. If the only tools you’ve seen are the ones posted online by proprietary companies, you have no clue how this really works or what is possible.
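
                    For reference, masked regeneration of that kind looks roughly like this with an inpainting pipeline; the file names and prompt are placeholders, not something I actually ran.

                    # Rough sketch of prompt-driven inpainting: white pixels in the mask are
                    # regenerated from the prompt, black pixels are kept from the original.
                    import torch
                    from PIL import Image
                    from diffusers import StableDiffusionInpaintPipeline

                    pipe = StableDiffusionInpaintPipeline.from_pretrained(
                        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
                    ).to("cuda")

                    init_image = Image.open("scene.png").convert("RGB")      # placeholder input image
                    mask_image = Image.open("sky_mask.png").convert("RGB")   # placeholder mask of the region to redo

                    result = pipe(
                        prompt="swirling Van Gogh night sky, thick impasto brush strokes",
                        image=init_image,
                        mask_image=mask_image,
                    ).images[0]
                    result.save("scene_new_sky.png")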

    • 👁️👄👁️ · 9 months ago

      There is nothing in LLMs that is able to verify truth. They should not be relied on for accurate information unless we make some sort of technological breakthrough on that front. They’re really good at generating plausible text, though.

      • j4k3@lemmy.world · 9 months ago

        People, and the internet, are no different. The vast majority of information that exists is incomplete or wrong at some level. Skepticism is always required, but assessing any medium by its performance, without premeditated bias, is the only intelligent approach that can grow with improving technology. Very few people are running the larger models (65B or larger) in an environment where they fully control every aspect of the LLM. I have such a setup running offline on my own hardware. On its own, my system is roughly 95% accurate on the tasks I use it for, and more accurate at them than the results I find when searching the internet.

        There are already open source offline models specifically designed to work over scientific white-paper archives, where every result cites the source from its database.
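
        As a toy sketch of how every result can carry its citation, here is a tiny TF-IDF retrieval over a made-up in-memory archive; the titles, passages, and query are invented for illustration only.

        # Toy retrieval-with-citations: every returned passage is tagged with the
        # document it came from.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        archive = {   # stand-in for a local white-paper database
            "Smith 2021, 'Diffusion Models'": "Diffusion models learn to reverse a gradual noising process...",
            "Lee 2022, 'Quantized LLMs'": "4-bit quantization lets large language models run on consumer GPUs...",
            "Chen 2020, 'Retrieval QA'": "Retrieval-augmented generation grounds answers in retrieved passages...",
        }

        titles = list(archive)
        vectorizer = TfidfVectorizer().fit(archive.values())
        doc_matrix = vectorizer.transform(archive.values())

        def answer_with_citations(query, k=2):
            scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
            ranked = scores.argsort()[::-1][:k]
            return [(titles[i], archive[titles[i]], float(scores[i])) for i in ranked]

        for title, passage, score in answer_with_citations("how can large models run on a desktop GPU?"):
            print(f"{passage}\n    -- cited from: {title} (score {score:.2f})\n")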

        Agents are a class of AI systems built from multiple models, where one model can route the prompt to more specialized models, or through a series of models equipped to check and verify a response, cite sources, or validate it against a database.
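
        A stripped-down sketch of that routing idea; the categories and stub “models” below are invented placeholders, with real specialized models and a proper verifier slotting in where the stubs are.

        # Toy agent-style router: a front model (here a keyword stub) classifies the
        # prompt, hands it to a specialist, and a checker inspects the draft answer.
        def route(prompt: str) -> str:
            text = prompt.lower()
            if any(w in text for w in ("cite", "paper", "study")):
                return "literature"
            if any(w in text for w in ("code", "function", "bug")):
                return "code"
            return "general"

        def literature_model(prompt):   # placeholder for a citation-aware model
            return "Draft answer with [source: local archive, doc 12]"

        def code_model(prompt):         # placeholder for a code-specialized model
            return "Draft patch ..."

        def general_model(prompt):
            return "Draft answer ..."

        def verifier(draft):            # placeholder check: flag drafts with no citation
            return draft if "[source:" in draft else draft + " (flagged: no citation found)"

        SPECIALISTS = {"literature": literature_model, "code": code_model, "general": general_model}

        def agent(prompt):
            return verifier(SPECIALISTS[route(prompt)](prompt))

        print(agent("Can you cite a paper on 4-bit quantization?"))
        print(agent("Why does this function crash?"))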