While many social media users are blaming the pedestrian for reportedly crossing against the light, the incident highlights the challenge autonomous driving faces in complex situations.
i recommend trying https://www.moralmachine.net/ and answering the 13 questions to get a bigger picture. it will take you no more than 10 minutes.
you may find out that the problem is not as simple as a four-word soundbite.
In this week’s Science magazine, a group of computer scientists and psychologists explain how they conducted six online surveys of United States residents last year between June and November that asked people how they believed autonomous vehicles should behave. The researchers found that respondents generally thought self-driving cars should be programmed to make decisions for the greatest good.
Sort of. Through a series of quizzes that present unpalatable options that amount to saving or sacrificing yourself — and the lives of fellow passengers who may be family members — to spare others, the researchers, not surprisingly, found that people would rather stay alive.
https://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html
same link: https://archive.is/osWB7
Can you swerve without hitting a person? Then swerve; else stay. This means that the car will act predictably, and in the long run that is safer for everyone.
can you not enter the road in front of an incoming vehicle while ignoring the red light? if you can, then don’t. that means that pedestrians will act predictably and in the long run it will be safer for everyone.
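To make the “swerve only if the path is clear” rule above concrete, here is a minimal Python sketch; the boolean perception input is a hypothetical placeholder, not any real AV stack’s API.

    # minimal sketch of the "swerve only when the path is clear" rule;
    # swerve_path_clear is a hypothetical perception input, not a real API
    def choose_maneuver(swerve_path_clear: bool) -> str:
        if swerve_path_clear:
            return "swerve"        # avoid the obstacle without endangering anyone
        return "brake_in_lane"     # otherwise stay predictable: hold the lane and brake

    print(choose_maneuver(False))  # -> brake_in_lane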
Is every scenario on that site a case of brake failure? As a presumably electric vehicle, it should be able to use regenerative braking to stop or slow, or even rub against the guardrails at the side in every instance I saw.
There’s also no accounting for probabilities or magnitude of harm, any attempt to warn anyone, or the plethora of bad decisions required to put a car going what must be highway speeds down a city stroad with a sudden, undetectable, complete brake-system failure.
This “experiment” is pure, unadulterated propaganda.
Oh, and that’s not even accounting for the intersection of this concept and negative externalities. If you’re picking an “AI” driving system for your car, do you pick the socially responsible one, or the one that prioritizes your well-being as the owner? What choice do you think most people pick in this instance?
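For what it’s worth, “accounting for probabilities and magnitude of harm” has a standard formulation: choose the action with the lowest expected harm, i.e. probability times severity summed over outcomes. A toy sketch; every number below is invented for illustration:

    # toy expected-harm comparison; probabilities and severities are invented
    actions = {
        "stay":   [(0.9, 2.0), (0.1, 8.0)],   # (probability, harm) per outcome
        "swerve": [(0.5, 0.0), (0.5, 9.0)],
    }

    def expected_harm(outcomes):
        return sum(p * h for p, h in outcomes)

    best = min(actions, key=lambda a: expected_harm(actions[a]))
    print(best, {a: expected_harm(o) for a, o in actions.items()})
    # -> stay {'stay': 2.6, 'swerve': 4.5}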
“here, take these extremely specific made-up scenarios that PROVE I AM RIGHT UNEQUIVOCALLY except for the fact that all of them are edge cases, do not represent any of the actual fatalities we have seen, and in no way are any of them representative of the case that sparked the whole discussion”
I think I’ll skip the “AI is always good and you’re just too stupid to get why it should be allowed to kill people” website.
This “experiment” is pure, unadulterated propaganda.
yeah, you didn’t get it at all…
What choice do you think most people pick in this instance?
oh hey, you are starting to get it 😂
maybe, when you finally understand what someone is trying to tell you, act less smug, don’t try to pretend you just caught them out, and you will look less like a clown.
the experiment is not about technical details; it is trying to convey the message that “what is the right thing to do” is not as easy to establish as you might think.
because yes, most people will tell you to protect more people at the expense of fewer, but that usually lasts only until the moment they are part of the smaller group.
While I do agree that there are scenarios that are very complicated, I feel like this website does a very poor job of showing those. Almost every single scenario they show doesn’t make sense at all. Why are there barriers on one side of the road, and why does half the crosswalk have a red light while the other half has a green light?
Why are there barriers on one side of the road, and why does half the crosswalk have a red light while the other half has a green light?
what, have you never seen construction on the road? have you never seen a traffic light that is manually activated?
both of these scenarios happen on a daily basis in real life.
and they are there so you can think about the decision. is the car occupant’s life more valuable than the life of an innocent bystander (eh, is bywalker a word?)? does that change when one group is bigger in numbers? does it change when one group is obeying the law and the other one is not?
i have answered basically the same question here: https://lemm.ee/post/36643403/13142015
Interesting link, thanks. I find this example pretty dumb though. There is a pedestrian crossing the street on a zebra crossing. The car should, oh I don’t know, stop?
Never mind, I read the description: the car has a brake problem. In that case, try to cause the least damage, like any normal driver would.
Maybe it could scrape against the barriers to slow down without such a sudden stop for the passenger. IRL it’d depend on how well they’re lined up.
the car is broken and cannot stop. otherwise it could just stop in every single one of the presented scenarios and the “moral dilemma” would be pretty boring
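On the “slow down however you can” point, simple kinematics (d = v² / 2a) shows how much the available deceleration matters. The figures below are assumptions for illustration (regen-only braking is often quoted around 0.2 g, full friction braking around 0.8 g), not measurements:

    # stopping distance d = v^2 / (2*a); the deceleration values are assumed
    G = 9.81                 # m/s^2
    v = 50 / 3.6             # 50 km/h in m/s

    for label, a_g in [("friction brakes, 0.8 g", 0.8), ("regen only, 0.2 g", 0.2)]:
        d = v ** 2 / (2 * a_g * G)
        print(f"{label}: {d:.0f} m")   # ~12 m vs ~49 m from 50 km/h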
90% of the Sophie’s-choice hand-wringing about this is just nonsense anyway. The scenarios are contrived and exceedingly unlikely, and the entire premise that you can even predict outcomes in these panic scenarios does not resemble any real moral framework that actually exists. A self-driving car which attempts to predict chaotic probabilities of occupant safety is just as likely to get it wrong and do more damage.
Yes, the meta-ethics are interesting, but the idea that this is any more actionable than trolley problems is silly.
Yes, the meta-ethics are interesting, but the idea that this is any more actionable than trolley problems is silly.
the point is that we are reaching the stage where the trolley problem stops being an “interesting theoretical brain teaser” and starts being something to which we have to know the answer.
because we have to know, as in we have to decide, whether to flip the switch or not. we have to decide whether we are going to protect these three over that one. whether this kid has more right to live than the senior, because the senior’s life is almost over anyway. whether that doctor’s life is more valuable than the grocery clerk’s.
and so on.
up until now, there wasn’t really a decision. the majority of people have trouble controlling the car under normal circumstances; in case of an accident, they just hit the brake, close their eyes and pray. whatever happens is really just the result of chance; there isn’t much philosophy about the value of life in play.
there is still some reasoning though; most of us probably won’t steer the car into a group of kindergarten kids on the sidewalk just to protect ourselves.
but the car will have more information, will be able to evaluate it better than a person can in such a short time, and will be able to react better.
the only thing that remains is that we have to tell it what to do. we have to tell it whose life has bigger value and whose life is worth protecting more. and that is where the trolley problem stops being an academic exercise.
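To see why this stops being academic: “telling the car whose life is worth protecting more” means someone literally commits a table like this to code or config. A deliberately uncomfortable sketch; every weight here is invented:

    # hypothetical, invented value weights -- shipping an AV policy forces
    # someone to write this table down explicitly, one way or another
    LIFE_WEIGHTS = {"child": 1.0, "adult": 1.0, "senior": 1.0}

    def group_value(people):
        return sum(LIFE_WEIGHTS[p] for p in people)

    # protect whichever group the weights score higher:
    spare_pedestrians = group_value(["child", "adult"]) > group_value(["senior"])
    print(spare_pedestrians)   # True with equal weights; change one number and
                               # you have answered the trolley problem in code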
The car should be programmed to always self-destruct or take out the passengers. This is the only way it can counter its self-serving bias or conflict of interest. The bonus is that there are fewer deadly machines on the face of the planet and fewer people interested in collateral damage.
Teaching robots to do “collateral damage” would be an excellent path to the Terminator universe.
Make this upfront and clear for all users of these “robotaxis”.
Now the moral conflict becomes very clear: profit vs life. Choose.
interesting idea. do you think there is a big market for such a product? 😆
With good enough regulation, there would be no market for such horrors.