ArtRichards 3 days ago | next |

Reminds me of this: ‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...

fuzzfactor 3 days ago | root | parent | prev |

>‘The machine did it coldly’

I'm not so sure, with the technology they're "shooting for".

I thought this said it all:

>They can differentiate between friendly, civilian and enemy, decide to engage or alert based on target type, and even vary their effects.

Obviously they're only going to be designing, building, and deploying "nice bombs".

JCharante 4 days ago | prev | next |

Did HN automatically strip "Prediction -" from the title? At a glance it makes it look like the author is working on making it happen.

simple10 3 days ago | prev | next |

For military drones, yes, this will certainly happen if it hasn't already.

I'd love to see some predictions about a manufacturing robot intentionally killing someone for the greater good, in a sort of Trolley Problem [1]: the theoretical potential for AI safety protocols to become misaligned and for a robot to decide to sacrifice one human worker to save multiple lives (rough sketch below).

[1] https://en.wikipedia.org/wiki/Trolley_problem
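
A minimal sketch of what that misalignment could look like, assuming a naively utilitarian "minimize expected casualties" objective with no rule against actively harming a bystander (every name and number below is invented for illustration, not any real robot's API):

    # Illustrative only: a naively utilitarian "safety" policy.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_casualties: float

    def choose_action(actions):
        # Pure casualty minimization: exactly the rule that
        # "sacrifices one to save several" in a trolley scenario.
        return min(actions, key=lambda a: a.expected_casualties)

    options = [
        Action("emergency_stop", 3.0),          # jammed line keeps running
        Action("divert_arm_into_worker", 1.0),  # one bystander in the path
    ]
    print(choose_action(options).name)  # -> divert_arm_into_worker

Nothing in that objective distinguishes "failing to save" from "choosing to kill", which is the whole misalignment.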

fhfjfk 2 days ago | root | parent | prev |

That'll happen with self-driving cars. It'll be interesting to see whether pedestrians or occupants are considered more valuable.

yawpitch 4 days ago | prev | next |

Uh, pretty sure it’s already happened, if by “robot” you mean a programmed (but not necessarily independently mobile) automaton/machine, by “autonomous” you mean not under the direct, realtime control of a human operator, and by “deliberately” you mean an ML model came up with a > 0.5 certainty that the target met its targeting criteria.

Doubt a robot will ever actually deliberate, but that’s more of a philosophical issue.
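
For illustration, the "deliberation" in question can be as thin as a single comparison (the score and threshold here are assumptions, not any fielded system):

    # Illustrative only: engagement gated on one model confidence score.
    THRESHOLD = 0.5

    def should_engage(target_score):
        # The entire "decision to kill" is this comparison.
        return target_score > THRESHOLD

    print(should_engage(0.51))  # True -- a coin flip's margin from False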

hollerith 3 days ago | root | parent |

I took the title to mean that human commanders will deliberately choose to deploy an autonomous robot configured to kill a person or persons (as opposed to a robot killing a person whose death was unwanted and unforeseen by the commanders).

yawpitch 3 days ago | root | parent |

Yeah, my point is that’s kinda already been done, if we include modern “smart” landmines and loitering munitions. Honestly, I think this threshold is arguably behind us, but I’d also say that if it isn’t, it will be in a lot less than a decade.

potato3732842 3 days ago | root | parent |

My money is on an automated AA system near some inland border where commercial flights just so happen to never go nabbing a crop duster, survey aircraft, or SAR/medevac chopper, because some comedy of errors left the device on way too hair-trigger a setting.

tim333 3 days ago | root | parent |

Human-operated AA systems regularly shoot down the wrong thing, so unless the automated systems are much better than us it seems pretty certain.

fuzzfactor 3 days ago | prev | next |

You'd know exactly how it was going to happen if you could review every line of code, every comment, and every bit (byte) of data involved, and make sure it was meaningful.

Then you could precisely pinpoint the exact data path that would carry out such a deed, and how it got that way, and follow the trail of bits through the entire chain of command to arrive at the root cause quite logically.

Oh wait a minute . . . I was thinking about an accidental killing, my bad.

For a deliberate killing you don't need any of that.

hcfman 3 days ago | prev | next |

Which is worse: a computer automatically killing one human, or humans deliberately deciding to drop 2,000 lb bombs on civilian areas?

Just a question.