In which we are joined by Ezri of Swampside Chats, to continue our discussion of "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum.
Computer Power and Human Reason: From Judgment to Calculation (1976) by Joseph Weizenbaum displays the author's ambivalence towards computer technology and lays out the case that while artificial intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom.
Weizenbaum makes a crucial distinction between deciding and choosing. Deciding is a computational activity, something that can ultimately be programmed. It is the capacity to choose that ultimately makes one a human being. Choice, however, is the product of judgment, not calculation. Comprehensive human judgment is able to include non-mathematical factors such as emotions. Judgment can compare apples and oranges, and can do so without first reducing each fruit to the quantified factors a mathematical comparison would require.
If you like the show, consider supporting us on Patreon.
Links:
Computer Power and Human Reason on Wikipedia
Weizenbaum's Nightmares, on The Guardian
Inside the Very Human Origin of the Term “Artificial Intelligence”
General Intellect Unit on iTunes
http://generalintellectunit.net
Support the show on Patreon
https://twitter.com/giunitpod
General Intellect Unit on Facebook
General Intellect Unit on archive.org
Emancipation Network
I actually think they are doing a slight disservice to how neurons work, in both neural nets and living brains. While you can say a single node collapses into 1/0, in practice, because of the large matrix operations, it's more like 100 neurons collapsing into continuous values like 5.5 or 10.7 at some following layer's neuron. Crucially though, real neurons can have 10,000 connections, wildly outstripping a feed-forward NN, and as you say they are not so binary.
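For the curious, here's a minimal NumPy sketch of what I mean; the layer sizes, weights, and numbers are made up purely for illustration:

```python
import numpy as np

# Toy feed-forward layer: 100 input neurons feeding 5 output neurons.
rng = np.random.default_rng(0)
inputs = rng.normal(size=100)        # activations arriving from the previous layer
weights = rng.normal(size=(5, 100))  # each output neuron has 100 incoming connections
bias = rng.normal(size=5)

# The matrix multiply collapses 100 inputs into 5 continuous values
# (e.g. 5.5, 10.7, ...), not a clean 1/0 per neuron.
pre_activation = weights @ inputs + bias
activations = np.maximum(pre_activation, 0.0)  # ReLU output stays continuous, not binary
print(activations)
```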
I feel like those questions are still describing the human mind, not AI capabilities. Despite being more positive about AI (it's all ML) as a technological achievement, I don't think these systems are even scratching the surface of the consciousness of a dog, despite a dog not being able to speak. They are performing likelihood operations over the whole internet, so if your conversations are with redditors, AI can prolly simulate your whole reddit thread, but not why people are saying the things they are saying.
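To make "likelihood operations" concrete, here's a minimal sketch; the vocabulary and logits below are made up, not the output of any real model:

```python
import numpy as np

# Toy next-token prediction: a real LLM scores ~100k tokens at each step;
# here a hypothetical 4-token vocabulary with made-up logits.
vocab = ["the", "cat", "sat", "reddit"]
logits = np.array([2.0, 0.5, 1.0, 3.2])  # pretend model output for some prompt

# Softmax turns raw scores into a likelihood distribution over the next token.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.2f}")
```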
And yes, anthropomorphizing AI is the human mind's empathy at work (you can empathize with plush toys, for god's sake), and people should reject it (I think outright, but maybe just resist it).
I feel like ChatGPT is close to something like the AI in The Expanse: you can talk to it, make it do stuff, but it's still dumb as hell.
It's an impressive thing, and it will help people, but thinking it's your friend is wild.
TL;DR: the Hexbear kneejerk reaction to general AI claims is correct; rejecting that it is a fucking impressive thing is not. People getting caught up in being an empathy machine with a data center deserve our empathy.
I agree with you on the empathy issue, but here’s where I hesitate to say it should be rejected outright:
I’ve had some interesting conversations with myself using GPT-4 as a sort of funhouse mirror, and even though I recognize that it’s just a distorted reflection… I’d still feel guilty if I were to behave abusively towards it? And I think maybe that’s healthy. We shouldn’t roleplay abuse just because there are no real-world consequences, if for no other reason than that it makes us more likely to engage in abuse when there are actual stakes.
In this scenario, the ultimate object of my empathy is my own cognitive projection, but the LLM is still the facilitator through which the empathy happens. While there is a very real danger of getting too caught up in that empathy, isn’t there also a danger in rejecting that empathetic impulse or letting it go unexamined?
The problem as I see it (and I'm not a psychologist or whatever) is that you don't have feelings towards your mirror, for example; your brain adapted to your reflection not being a real thing at like 2-3 years old.
The brain doesn't have natural defenses against empathizing with an LLM (even with ELIZA, people were ready to go tell the program their secrets). And feelings aren't logical (as in, you can know it's bullshit and still feel some fulfillment from such conversations). They will prolly discuss (in the podcast) what the author thought of that phenomenon with ELIZA, but I can see that becoming a problem at scale in an atomized society: a noticeable number of people will drop out into LLM fantasies.
I don't think there is a danger in rejecting that empathy. I like some plush toys from my childhood; I would be hurt if something happened to them, and I wouldn't hurt them, but I also don't empathize with them.