- cross-posted to:
- technology@kbin.social
My favorite part of this story:
“The rocket terminated the flight after judging that the achievement of its mission would be difficult.”
“Man, this is too hard, better explode!”
It’s going to be a combination of red flags that an algorithm weighs, triggering the self-destruct if a threshold is exceeded. It probably even gives HQ a short window to override it (if comms are working).
It’s not going to have a built-in “AI” making “intelligent” decisions in a dynamic way. That would be extremely dangerous/unreliable, as well as require a shit ton of processing power.
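The weighted red-flag logic described above can be sketched in a few lines. Everything here is illustrative: the flag names, weights, and threshold are invented for the example, not anything from the rocket’s actual flight-termination system.

```python
# Hypothetical flight-termination check: sum weighted "red flags" and
# trigger the abort when the total crosses a threshold. All values below
# are made up for illustration.

RED_FLAGS = {
    "off_nominal_trajectory": 5,
    "engine_underperforming": 3,
    "loss_of_attitude_control": 5,
    "telemetry_dropout": 2,
}

ABORT_THRESHOLD = 7  # invented cutoff for this sketch

def should_terminate(active_flags):
    """Return True if the weighted sum of active red flags exceeds the threshold."""
    score = sum(RED_FLAGS[flag] for flag in active_flags)
    return score > ABORT_THRESHOLD

# Trajectory deviation plus engine trouble (5 + 3 = 8) crosses the line;
# a lone telemetry dropout (2) does not.
print(should_terminate({"off_nominal_trajectory", "engine_underperforming"}))  # True
print(should_terminate({"telemetry_dropout"}))  # False
```

No “intelligence” required, just fixed weights and a comparison, which is exactly why it’s predictable enough to fly.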
Stop buying into the AI bullshit. Algorithms != AI
It’s not buying into AI bullshit to infer some processing and assessment from something said to have decided something. Decisions involve consideration; they’re not like instincts.
It seems like the person saying that misspoke.
They didn’t misspeak, they anthropomorphised. People do that all the time, and calling it an error is pedantic to the point of being incorrect.
Also, that statement was probably in Japanese. You can’t read that kind of implication from it, even if it would have been correct to do so in English (which it wouldn’t).
That’s misleadingly inaccurate if it wasn’t misspeaking; calling it a mistake was charitable (though the issue could definitely rest in translation, you’re right).