MerryChristmas [any]

  • 0 Posts
  • 291 Comments
Joined 4 years ago
Cake day: August 11th, 2020

  • I think you’ve seen me whine about my work enough to know my feelings on offices and meaningless jobs. But I’ve also hit the burnout point in every line of work I’ve ever engaged in, and I don’t want to end up that way with compassion.

    So I guess my advice here is to keep an open mind as to what “helping people” means. Think about what you’re good at and where it might be put to use in your community. You could take a part-time job stocking shelves at a grocery store, for instance, and while it’s maybe not the direct action that you’re looking for, people need to eat and you need a paycheck. It doesn’t have to be your identity - just the thing you’re doing right now to get by without hurting anyone while you figure things out.

    This is more or less my plan now that we’ve finally got some savings. I’m hoping that without the shame of office life weighing me down, I’ll feel a little more free to contribute outside of work?

  • I agree with you on the empathy issue, but here’s where I hesitate to say it should be rejected outright:

    I’ve had some interesting conversations with myself using GPT4 as a sort of funhouse mirror, and even though I recognize that it’s just a distorted reflection… I’d still feel guilty if I were to behave abusively towards it? And I think maybe that’s healthy. We shouldn’t roleplay engaging in abuse without real-world consequences, if for no other reason than that it makes us more likely to engage in abuse when there are actual stakes.

    In this scenario, the ultimate object of my empathy is my own cognitive projection, but the LLM is still the facilitator through which the empathy happens. While there is a very real danger of getting too caught up in that empathy, isn’t there also a danger in rejecting that empathetic impulse or letting it go unexamined?


  • Some random thoughts I jotted down while I was listening to Part 1:

    • I gotta show my wife this fungi episode they’re referencing.

    • On representing neurons with binary, I have to admit I struggle with this one as well. I am trying to think of zero as an abstraction of infinity approaching one end of a closed set and one as an abstraction of infinity approaching the other end. We can zoom in on a point in that set for greater specificity, but the further we zoom the less information we have about how that point relates to the rest of the set in that given moment.

    What’s counterintuitive is that this is a top-down approach and a bottom-up approach at the same time. Zero is defined by its relationship to one, and one is defined by its relationship to zero. We don’t have a true measure of distance between zero and one without an additional point existing outside of the set to serve as a frame of reference, but then that creates a new set of zero to one.

    I’m not sure where I’m going with all of this, but it’s left me confused in a good way. I’m hoping they dig into this a little more in Part 2. I’m also hoping I have the math literacy to understand it because I didn’t start taking an interest in math until well after I was done taking math classes…
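    One way I’ve tried to make the “zero and one as limits” picture concrete (my own sketch, not something from the episode): a sigmoid squashes an unbounded neuron activation into the open interval (0, 1), so zero and one behave like ends that can only ever be approached, never reached.

    ```python
    import math

    def sigmoid(x: float) -> float:
        """Map an unbounded activation into the open interval (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    # Zero and one act like limits at either end of the set: large
    # negative inputs approach 0, large positive inputs approach 1,
    # but neither endpoint is ever actually reached.
    for x in (-10, -1, 0, 1, 10):
        print(f"{x:>4} -> {sigmoid(x):.6f}")
    ```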

    • The conversation surrounding autism, AI and the validity of alien minds is particularly relatable to me, and I’d also like to add schizophrenia to the discussion. Both autism and schizophrenia are spectrum disorders. The traits that make up these disorders exist to varying degrees in the general population, but we only say someone is autistic or schizophrenic when those traits reach a threshold where they are no longer considered desirable by neurotypical society. I have an intuition that some combination of these traits is an inherent part of the human conscious experience.

    Will these traits be replicated in artificial minds to varying degrees as we begin to develop intelligences for more specialized tasks? And if so, how might that change the way these traits are valued more generally?

    • I’m really enjoying how cautious the hosts are toward anthropomorphizing AI while still addressing the ethical questions that arise from that anthropomorphism. Open to the possibilities without lending credence to them. I think that is an attitude that ought to be cultivated on the left.

    I’ve got Part 2 in my queue!