This week, the United Nations is talking about robots with guns. Science fiction is always quick to condemn robots for their sins, and now science fiction is turning into international policy.

*NASA unveiled Valkyrie, its robot "superhero," in Dec.*

The UN surely knows the fictional sins of robots. Hal 9000 should have opened the pod bay doors. Skynet should not have destroyed the world with Terminator robots (and nuclear bombs). And Daleks should stop trying to "Exterminate! Exterminate!"

Who do we really hold accountable for these imagined crimes, though? Is Hal 9000 just a bad program? Is the creator of Skynet theoretically responsible for the murders committed by Skynet? Do Daleks even qualify as robots? More importantly, where do we draw the line on holding creators responsible for the crimes committed with their technology?

## Do Robots Kill People or Do Programmers Kill People?

More traditionally, do guns kill people or do people kill people? That question is politically loaded, but it is a valid question. How far do we have to remove people from a fatal technology before technology itself becomes the killer?

So this week, the UN Convention on Certain Conventional Weapons (CCW) is meeting to discuss the pros and cons of lethal autonomous weapons systems. Consider these questions from the actual agenda for the meeting:

- How does the development of lethal autonomous weapons systems impact humans?
- Are lethal autonomous weapons systems socially acceptable?
- What are the levels of autonomy and predictability in robotics?
- What is the relationship between humans and robots?

Just in case you missed what is really going on here, let me say it again. The UN is talking about robots with guns. A lethal autonomous weapons system is pretty much a Terminator robot without the ability to travel back in time and kill your mother before you're born.

Sound like science fiction? Earlier this year, U.S. General Robert Cone suggested the army could scale the average size of a brigade from 4000 to 3000 soldiers, with robots picking up the slack. John Campbell, Vice Chief of Staff for the U.S. Army, put it this way:

> If we downsize a brigade, how can we keep the same types of brigades out there but be smaller? With technology, how can we do that? Robotics, how can that help us? Do we need a nine-person vehicle, or can we go to six-person? Do we use avatars? What we're trying to do now is make sure, over the next several years, we can keep our budget up so we can help ourselves down in 2025.

Avatars? I'm not making this up.

Army leaders are not talking about robots with guns. These would be support robots: bomb defusers, scouts, maybe even drivers. People would still be the lethal "teeth" of the army, but they would be increasingly supported by robots. Even so, I'm sure these support robots are part of the reason for the discussion happening at the United Nations.

Military robots are just the trigger for a deeper conversation that needs to happen. People of faith are facing increasingly difficult questions about technology and its effect on our ability to serve God in our work. How do we learn to anticipate the ethical challenges ahead of us? At what point does our technology increase our capacity for production and action beyond our ability to make good moral decisions? At what point does our technology make moral decisions for us?

But the flip side of this is also true. If robots can commit crimes and sins, can they also serve and honor? Robots can't sin, you say. Why not? If a robot can be programmed to act autonomously, it could ultimately act against the will of its creator.