WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’
It’s reasonable to refer to unsupervised learning as “learning on its own”.
I don’t think a computer being given the command to crunch through a bunch of matrices is in any way analogous to what is meant by the phrase “learning on one’s own”.
That’s your prerogative.
I’m not sure what you’re trying to say here. Making a machine do math is significantly different from self-directed pedagogy in every way I can think to compare them.
You do know that the entire field of study is called “machine learning”, right?
Doesn’t that give you pause?
To keep us on topic: the thing I hope you’re referring to as unsupervised learning is when someone creates a model by feeding a big dataset into an algorithm. Within the field that’s referred to as training, not learning; outside the field, training and learning describe different processes.
Without getting into the argument about Is Our Computers Learning, you can’t possibly call that self-directed. It’s just not.
The closest analogy would be externally directed learning, where a teacher is trying to teach you to write in a particular style or format and assigns you an essay to be written using only sources from a curated list (like when we had to write about Vietnam in high school US history!).
Of course, that’s not what’s happening when someone asks Bing to make a picture of the Mucinex booger man with big tits or to give proof that the Holodomor happened (two extremely funny examples I saw in the wild last week). When the average person uses a generative algorithm to get some output, the computer is just regurgitating something that fits the dataset of the model it was trained on.
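To make the “regurgitating” point concrete, here’s a deliberately dumb sketch (nothing like how Bing actually works; the data and names are all made up by me): a “model” that, when queried, just hands back whichever training example best fits the query.

```python
# Toy caricature of "generation as regurgitation": the "model" is the
# training data itself, and a query returns the closest training row.
# (Hypothetical example; real generative models interpolate rather than
# copy rows, but the output still fits the training distribution.)
import numpy as np

rng = np.random.default_rng(0)
training_data = rng.normal(size=(1000, 16))  # made-up "dataset"

def query(model_data: np.ndarray, prompt_vec: np.ndarray) -> np.ndarray:
    # "Generate" by returning the training row nearest the prompt vector.
    dists = np.linalg.norm(model_data - prompt_vec, axis=1)
    return model_data[np.argmin(dists)]

output = query(training_data, rng.normal(size=16))
print(output[:4])  # by construction, something that fits the dataset
```

Real models are fancier than a nearest-neighbor lookup, but the point stands: the output is whatever best fits the data the model was fit to.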
There’s a paper floating around somewhere about how the field of machine learning needs to tighten up how it uses language. IIRC it was written before even DeepDream, and it basically said “these jerks can’t even use words to accurately describe what they’re doing, and we expect them to make a thinking machine?” Lo and behold, the “artificial intelligence” we have hallucinates details and makes up claims and sources.
Doesn’t what give me pause?
“Learning” is absolutely used within the field. It’s preposterous to claim otherwise.
Machine learning
Deep learning
Supervised learning
Unsupervised learning
Reinforcement learning
Transfer learning
Active learning
Meta-learning
Etc etc
Do you think journalists came up with all these terms? “Learning” is the go-to word for training methodologies. Yeah, it’s training, but what do we say a subject that’s being trained is doing? (Learning, one would hope.)
In fact, I’d argue that if “learning” is inappropriate, then so is “training”.
I just learned that there is no undo button on Gboard, and I cannot be bothered to type all that again. Just imagine I said something clever.
i’m not sure of the point you’re trying to make in this post.
i’m well aware of how loosely language is used in the field of machine learning.
in an attempt to stay on topic: training a model on a dataset cannot and should not be described as “learning on its own”. no computer “decided” to do the “learning”. what happened is that a person invoked commands to make the computer do a bunch of matrix arithmetic on a big dataset, with a model as the output. later on, some other people queried the model and observed the output. that’s not learning on its own.
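to put a face on “a bunch of matrix arithmetic”, here’s a minimal k-means sketch in plain numpy (toy data; k and the iteration count are arbitrary choices of mine, not anything from the thread):

```python
# a person runs this; the machine grinds matrices; a "model" falls out.
# minimal k-means on made-up data, k = 3.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))                        # the "big dataset"
centroids = X[rng.choice(len(X), 3, replace=False)]  # initial guesses

for _ in range(20):  # "training" is just this arithmetic in a loop
    # assign every point to its nearest centroid
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    # move each centroid to the mean of its assigned points
    centroids = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(3)
    ])

print(centroids)  # "the model": an array of numbers, nothing more
```

the “model” at the end is literally just an array of floats. nothing in there decided anything.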
let me give you an example: if i queue up a bunch of stuff for ffmpeg to encode overnight, would you say my computer is making films on its own? of course you wouldn’t, that’s absurd.
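for comparison, that overnight job is just a loop like this (hypothetical paths; assumes ffmpeg is installed and on your PATH):

```python
# queue a directory of videos for ffmpeg to encode; the person directs,
# the machine executes. paths here are made up for illustration.
import subprocess
from pathlib import Path

for src in Path("to_encode").glob("*.mkv"):
    dst = src.with_suffix(".mp4")
    subprocess.run(
        ["ffmpeg", "-i", str(src), "-c:v", "libx264", str(dst)],
        check=True,
    )
```

nobody looks at that and says the computer made a film on its own.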