• 72 Posts
  • 1.51K Comments
Joined 1 year ago
Cake day: August 28th, 2023

  • Ok, good luck with your project. We’ll talk when any given PeerTube project (running on a donation-based funding model alone) reaches break-even.

    I swear I’ve gone over the finances on this a million times. Funding models in their current form just don’t work. Content creators with huge audiences, getting free hosting from YouTube, are struggling to keep themselves afloat. But whatever, good luck with your project, I suppose. We really need YouTube’s monopoly to end, so ¯\_(ツ)_/¯


  • Sure! Remember, though, that you are funding this project with your own money. How much does your server cost? How much does the electricity to run it cost? You would need gigabit-speed internet. How much does that cost?

    You would be funding this out of your own pocket. Thank you for doing that! Would there be a thousand more people willing to do this? What happens if you lose your job? What happens to the server?

    As you can see, this is not a technological problem but a funding one. If you can somehow generate funding for this, you have a very viable model. But that’s a big if.

    I am saying that funding this would be difficult. I see people just yapping about FOSS, but not funding it when the time comes.
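    To make the funding question concrete, here’s a minimal back-of-the-envelope sketch. Every price in it is an assumption I’m inventing for illustration, not a real quote from any provider:

```python
# Rough monthly cost of self-hosting video.
# All prices below are illustrative assumptions, not real quotes.

def monthly_hosting_cost(
    storage_tb: float,                   # size of the video library
    egress_tb: float,                    # monthly outbound traffic
    storage_usd_per_tb: float = 20.0,    # assumed disk amortization per TB
    egress_usd_per_tb: float = 5.0,      # assumed bandwidth price per TB
    power_usd: float = 15.0,             # assumed electricity for the box
) -> float:
    """Return an estimated monthly bill in USD."""
    return (storage_tb * storage_usd_per_tb
            + egress_tb * egress_usd_per_tb
            + power_usd)

# A small instance: 4 TB of video, 10 TB of monthly egress.
cost = monthly_hosting_cost(storage_tb=4, egress_tb=10)
print(f"~${cost:.0f}/month")  # 4*20 + 10*5 + 15 = $145/month
```

    Even at these optimistic made-up prices, the bill recurs every month whether or not donations come in, which is the whole point about funding.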


  • Ok, so you just ignore the reports and continue to coast on feels over reals. Cool.

    I didn’t. I went through your links. Your links, however, pointed at a problem with the environment our LLMs operate in, not with the LLMs themselves. The coding one, where the LLM invents package names, is not the LLM’s fault. Can you accurately recall package names purely from memory? No. Neither can the LLM. Give the LLM the ability to look up npm’s registry. Give it the ability to read the docs, and then look at what it can do. I have done this myself (feeding it that information manually), and it has been a beast.
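    The registry lookup I’m describing is trivial to bolt on. A minimal sketch, assuming npm’s public registry URL (which returns 404 for nonexistent packages); the `filter_hallucinated` helper and the example package names are my own invention for illustration:

```python
import urllib.request
import urllib.error

REGISTRY = "https://registry.npmjs.org/"  # npm's public registry

def package_exists(name: str, registry: str = REGISTRY) -> bool:
    """True if `name` is a real package; the registry 404s on invented ones."""
    try:
        with urllib.request.urlopen(registry + name, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

def filter_hallucinated(names, exists=package_exists):
    """Split an LLM's suggested dependency list into real vs. invented."""
    real, invented = [], []
    for name in names:
        (real if exists(name) else invented).append(name)
    return real, invented
```

    Run the model’s proposed dependencies through a check like this (or hand it the lookup as a tool) and the invented-package failure mode disappears.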

    As for the link in the reply, it’s a blog post about anecdotal evidence. Come on now… I literally have personal anecdotal evidence to parry this.

    But whatever, you’re always going to go “AI bad, AI bad, AI bad” until it takes your job. I really don’t understand why AI denialism is so prevalent on Lemmy, a leftist platform, where we should be discussing seizing the new means of production instead of denying its existence.

    Regardless, I won’t contribute to this thread any further, because I believe I’ve made my point.




  • Naah I thought about this before and came to the conclusion that this isn’t that bright of an idea. Here’s why.

    Why’s video hosting so expensive in the first place? Because it needs a lot of computational power, storage and bandwidth: all three things a mobile phone lacks. If you make your clients’ mobile phones do this work, you’re going to slow their phones down, make them heat up more, make them degrade faster (because it draws power from the battery) and eat a huge chunk of their bandwidth.

    Think of how video calls drain battery really fast. It’s just shifting the costs of hosting from the hosting side to the consumer side while making the entire operation a lot more complicated and a lot more inefficient.
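    The numbers make the point. A rough sketch, where the bitrate, radio power draw and battery capacity are all assumptions I’m picking as typical ballpark figures:

```python
# What serving P2P video would cost a phone, under assumed figures:
#   - one 1080p stream ~ 5 Mbps of upload
#   - radio draws ~ 2 W while uploading; battery holds ~ 15 Wh

BITRATE_MBPS = 5     # assumed per-viewer 1080p bitrate
RADIO_WATTS = 2.0    # assumed radio power draw under sustained upload
BATTERY_WH = 15.0    # typical phone battery capacity

def data_served_gb(viewers: int, hours: float) -> float:
    """Upload volume in GB for `viewers` concurrent streams over `hours`."""
    return viewers * BITRATE_MBPS * hours * 3600 / 8 / 1000  # Mbit -> GB

def battery_hours() -> float:
    """Hours until the radio alone drains the battery."""
    return BATTERY_WH / RADIO_WATTS

print(data_served_gb(viewers=4, hours=1))  # 4 * 5 * 3600 / 8000 = 9.0 GB
print(battery_hours())                     # 15 / 2 = 7.5 hours
```

    So four viewers for one hour already burns ~9 GB of the phone owner’s data, and the radio alone would flatten the battery in an afternoon. That’s the cost being shifted onto the consumer.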



  • You’re ignoring the whole Job of a judge, where they put the actions and laws into a procedural, historical and social context (something which LLMs can’t emulate) to reach a verdict.

    LLMs would have no problem doing any of this. There’s a discernible pattern in any judge’s verdict. LLMs can easily pick this pattern up.

    You know what’s the quality of the code LLMs shit out?

    LLMs in their current form “spit out” code in a very literal way. Actual programmers never do that. No one is smart enough to code purely by intuition. We write code, look at it, run it, read the warnings/errors if any, fix them and repeat. No programmer writes code and gets it right on the first try.

    LLMs till now have had their hands tied behind their backs. They haven’t been able to run the code by themselves at all. They haven’t been able to do recursive reasoning. TILL NOW.

    The new o1 model (I think) is able to do that. It’ll only get better from here. Look at the sudden jump in the quality of code output. There’s a very strong reason why I believe this as well.

    I use LLMs heavily for my code. They tend to write shit code on the first pass. I give them the output, the issues with the code, any semantic errors and so on. By the third or fourth round trip, the code is perfect. I’ve stopped needing to manually type out comments and so on; LLMs do that for me now (of course, I supervise what they write and don’t blindly trust them). Using LLMs has sped up my coding by at least 4× (and I’m not even using a fine-tuned model).
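    The loop I do by hand is easy to automate. A minimal sketch: `llm_fix` stands in for whatever model call you use (it’s a placeholder I’m assuming, not a real API), and the rest just runs the snippet and feeds errors back:

```python
import os
import subprocess
import sys
import tempfile

def run_and_report(code: str) -> tuple[bool, str]:
    """Run a Python snippet; return (ok, output) — the same signal
    a human pastes back into the LLM chat."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=30)
        return proc.returncode == 0, proc.stdout + proc.stderr
    finally:
        os.unlink(path)

def refine(code: str, llm_fix, max_rounds: int = 4) -> str:
    """Feed errors back to `llm_fix(code, errors)` until the code runs."""
    for _ in range(max_rounds):
        ok, output = run_and_report(code)
        if ok:
            return code
        code = llm_fix(code, output)
    return code
```

    Give the model the runner’s output each round and you get exactly the third-or-fourth-pass convergence I described, without a human in the middle.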

    I also don’t agree with your assessment. If an LLM passes a perfect law exam (a thing that doesn’t really exist) and afterwards only invents laws and precedent cases, it’s still useless.

    There’s no reason why it would do that. The underlying function behind verdicts/legal arguments has been the same and will remain the same, because it’s based on logic and human morals. Tackling morals is easy, because LLMs have been trained on human data; their morals are a reflection of ours. If we want to specify our morals explicitly, we can make them law (and we already have, for the ones that matter most), which makes things even easier.




  • Depends on how you look at it. These models weren’t trained in a vacuum. They were trained on data generated by humans. They are the amalgamation of all human art throughout human history. They are a reflection of us, the way a child is a reflection of their parents.

    That being said, I am very excited about art generated through collaboration between humans and these models. I for one would love to see Castle Swimmer (a webcomic) turned into an animation. Currently, no one will fund any such project. With video gen models, however, I’m very positive we would get to see it.

    The original author’s story is still there. Her characters are there, her dialogues are there. They’re just brought to life visually. I still find a lot of humanity in this.






  • It’s weird, but I think I always knew. So, we didn’t have sex ed in school (I lived in a very conservative country). My parents also never had “the talk” with me. That’s why I only got to know what sex was somewhere around the age of 11–12.

    Now, I was masturbating n stuff WAAAAAAAY before this. And I knew what fantasies I had when masturbating lmao. They very clearly were boys from school.

    Anyway, after I got to know what sex actually meant (kinda), I quickly discovered porn when I went looking for a sex demonstration or something (I initially thought you had to go to the hospital and perform this “sex procedure” in front of the doctors to make a baby or whatever). Anyway, so I discovered straight porn. At the same time, people at school had started using “gay” as a slur.

    I knew it meant man+man instead of woman+man. I looked it up, and immediately understood that I was indeed gay.

    Now ofc, acceptance took almost my entire teenage life. I hated myself for being who I was and so on. Wasn’t very nice.