My right hand gets colder faster than my left hand, for some reason. It happened yesterday, and it happened again today. I can think of reasons — other than there being something wrong with me — for this to be happening, but it’s still upsetting.

Now, to the title.

I’m not a proponent of AI, necessarily. I think AI art is weird, both morally and aesthetically, and I don’t think AI can yet be trusted with high-stakes decisions: medical diagnosis, legal analysis, research. Moreover, I find the idea of artificial general intelligence utterly terrifying. As a scientist (and I do consider myself a man of science), I can’t help but marvel at the possibilities, but as a person? It’s horrifying. Just because something is scary doesn’t mean it’s bad; still, this kind of technology is dangerous and unpredictable enough that scepticism and an abundance of caution seem warranted.

What I’d like to talk about, though, are AI search engines; or rather, AI as a search engine.

I use ChatGPT (and Perplexity) extensively when I’m looking for specific information or for something to bounce ideas off of. I think they’re great tools for that. It’s still important to be careful and to double-check what they say (they’re often wrong or inaccurate), but these tools provide great jumping-off points.

More and more, browsers are adding AI features. At first I was sceptical, but I’ve come to welcome this kind of change, as long as it’s optional. A chronic problem of mine is that I hate bloat. I hate when my browser has or does things that I don’t want or need it to do. Here’s a screenshot of my Firefox as I write this:

[Screenshot: my Firefox]

Rather minimal, right?

This is what I’m talking about. I want my stuff to be what I want, and nothing more. Very GNU of me, I know.

Do I want an AI assistant? No.

Do I want an easy way to reach an AI? Yes, actually. I’d like a button or a shortcut I can use to just ask Perplexity (or whatever) a question. Is AI integration the best way to do this? I don’t know; I’m not a software engineer (not that software engineers know either, but I digress).
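In the meantime, one low-tech approximation works today, assuming Perplexity’s query URL still follows the pattern I’ve seen: bookmark https://www.perplexity.ai/search?q=%s in Firefox, give the bookmark a keyword like “p”, and typing “p some question” in the address bar becomes a one-shot Perplexity search.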

I’m looking forward to seeing how technology changes.

  • j4k3@lemmy.world · 2 days ago (edited)
    I much prefer to run AI on my own hardware, and I find it useful. I run much larger models on enthusiast-level edge hardware, and I’ve explored this a lot over the last year and a half.

    The more you interact with AI, the stronger the profile it builds of you. The same mechanism that lets the model synthesize a response from its dataset also lets it infer a profile of you.

    The first thing to understand is that everything in AI is roleplaying. Even if you do not see the full starting instruction loaded into the prompt by a service like ChatGPT or Perplexity, there is still an instruction loaded that says something like “You’re a helpful assistant.” That is still a roleplaying instruction. When the reply is generated, the model is trying to determine what an assistant should know and how an assistant should reply.

    The model cannot even decipher the difference between Name-1 and Name-2 (the underlying labels for the human and the bot, respectively). The task of determining what an assistant should or should not know requires a profile of characteristics that is no different from the one built for the end user, and many assumptions are made when this profile begins.
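    To make the roleplaying point concrete, here is a minimal Python sketch of how a local frontend might assemble such a prompt. The template and the Name-1/Name-2 labels are illustrative; real chat templates vary by model.

        # Illustrative only: a chat prompt as one block of text the model continues.
        SYSTEM = "You're a helpful assistant."

        history = [
            ("Name-1", "Do I need to aerate my lawn every year?"),  # human
            ("Name-2", "Once a year is enough for most lawns."),    # bot
            ("Name-1", "What about overseeding?"),
        ]

        def build_prompt(system, turns):
            # Everything, including the instruction, is plain text to the model.
            lines = [system]
            for speaker, text in turns:
                lines.append(f"{speaker}: {text}")
            lines.append("Name-2:")  # the model roleplays the assistant's next turn
            return "\n".join(lines)

        print(build_prompt(SYSTEM, history))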

    The tensor complexity that connects the words and ideas of the prompt to an answer works in a similar way for each profile. It is like how, if I ask whether you own a lawnmower, I can predict with high statistical probability whether you own a home, are married, and have kids. Because everything in a model is connected, the moment you start interacting with the prompt, everything you say (the grammar, the word choice, the subject, the mistakes) goes into building a more statistically accurate profile of you. And because everything is connected, when asked under the right configuration, the model can answer with almost anything about you. In my testing it is about 80% accurate once the prompt is around 5k-10k tokens long. Things you never talked about or even hinted at become possible to question and extract from the profile. Simply loading in a block of text does not have the same effect as interactive dialogue.
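    The lawnmower inference can be made concrete with a toy Bayesian update; every number below is invented for illustration, not measured.

        # Toy numbers only: how one answer shifts a profile estimate.
        p_home = 0.65                  # assumed prior: user owns a home
        p_mower_given_home = 0.70      # assumed: homeowners with a lawnmower
        p_mower_given_not = 0.05       # assumed: non-homeowners with one

        p_mower = p_mower_given_home * p_home + p_mower_given_not * (1 - p_home)
        p_home_given_mower = p_mower_given_home * p_home / p_mower  # Bayes' rule

        print(f"P(homeowner | owns lawnmower) = {p_home_given_mower:.2f}")  # ~0.96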

    This profile could be used to influence a person on an invasive level. It is a manipulative advertising and political influence tool unlike anything the present stalkerware data-mining junk is capable of creating.

    I don’t worry about any danger from generative image or text-gen AI. The dangerous stuff is image recognition, and that is a whole different thing, really. Flying murder drones are something to worry about.

    I think we are also there when it comes to many uses for AI, like science and medicine, at least within limits. Read the first page of the documentation for the Transformers library on Hugging Face: it is the basis of all the publicly available AI tools, and it clearly states that it is an incomplete implementation, one that prioritizes accessibility for codebase modifications over completeness or performance.
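    For a sense of the accessibility that library prioritizes, here is a minimal sketch using the Transformers pipeline API. The model choice is arbitrary, and you need the transformers package plus a backend like torch installed.

        # Minimal text generation with Hugging Face Transformers.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")  # any causal LM works
        out = generator("The first microprocessors were", max_new_tokens=30)
        print(out[0]["generated_text"])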

    The scope of AI capabilities is very narrow. It takes a true expert to wield that scope effectively in a niche space. Under the surface, these are extremely complex systems. There are many compromises made in order to make the tools available to the general public.

    At present we are in the age of the first microprocessors, back in the mid-1970s. The scope of usefulness is like that of an Apple I kit computer. Still, those early computers have a whole lot in common with each CPU core running in your devices today. In the early days, a microprocessor required a lot of peripheral hardware to be useful, often with several microprocessors in a single machine just to do basic personal computing and business tasks. AI is at the same level of scope in the present. Eventually, all the peripheral code (model loaders, multi-model agents, database integration, function calling) will be integrated into a nearly monolithic system, and that will surpass humans in capability. It is only a matter of time. While models are usually 80%-90% accurate, for humans 70% is generally considered average. We tend to forget that.
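    To give a flavour of that peripheral glue, here is a hedged Python sketch of a bare-bones function-calling loop. The JSON protocol and the get_weather tool are hypothetical stand-ins; real frameworks standardize this dispatch.

        import json

        # Hypothetical tool the model may "call"; a real one would hit an API.
        def get_weather(city: str) -> str:
            return f"Sunny in {city}"

        TOOLS = {"get_weather": get_weather}

        def dispatch(model_output: str) -> str:
            """Run a JSON tool call if the model emitted one, else pass text through."""
            try:
                call = json.loads(model_output)
                return TOOLS[call["name"]](**call["arguments"])
            except (ValueError, KeyError):
                return model_output  # plain-text answer, no tool call

        # Pretend the model decided to call the tool:
        print(dispatch('{"name": "get_weather", "arguments": {"city": "Lisbon"}}'))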

    We also have a tendency to build mythologies, like the machine gods of the industrial revolution. The majority of the fear of AI as dangerous is based on the same illogical mythos. The idea that a superior sentient entity would inevitably exterminate its creators is nonsense; by this logic, the Earth should be a monoculture of the dominant lifeform. I think that, under the surface, we are collectively scared of the coming reckoning with ourselves and the many contradictions and fallacies that will come to light when a better-reasoned sentient lifeform emerges. We don’t want to be held accountable for the realities of our true subsentient ineptitude.

    • gon [he] (OP) · 2 days ago

      Yeah. To be fair, I’m very ignorant when it comes to this subject, so I can’t really comment on it too much.