In its recently released "Unmanned Aircraft Systems Flight Plan 2009-2047" report, the US Air Force details a drone that could fly over a target and then decide whether or not to launch an attack, all without human intervention. The Air Force says that increasingly, humans will monitor situations rather than be deciders or participants, and that "advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input." Programming of the drone will be based on "human intent," with actual humans monitoring the execution while retaining the authority and ability to override the system. The Air Force plans to have these dudes operational by 2047.

I think I've heard of this before. War machines that function on their own, based on computer intelligence. Humans would be able to override, unless something went wrong. And the report does say we can trust the machines just as we trust people:
"Such unmanned aircraft must achieve a level of trust approaching that of humans charged with executing missions," the Air Force stated.

I guess that would be o.k. We obviously need things like this to kill people who don't agree with us. Government scientists and politicians are in charge. What could possibly go wrong?
Photo credit Engadget. More reassuring details at PC World.
This just gives me the willies. Next thing you know your Roomba will be telling you to change your socks or it will kill you.