
Thursday, May 14, 2009

War Machines, Agents and Asimov's Laws

A cursory scan over the table of contents of P. W. Singer's book, Wired for War, reads like a strange cross between the scripts for Will Smith's movies Enemy of the State and I, Robot. The book should be good reading: a potent reminder of technological progress over the last 40 years, and of its ethical implications.

My first experience with unmanned aerial vehicles (UAVs) was during the US Army Officer Basic Course. It was 1998, and I was stationed 14 miles from the Mexican border at Ft. Huachuca, Arizona. Ft. Huachuca is the "Home of Intelligence," and it was the testing ground for early UAV models. They were poorly designed back then, often crashing into the mountainside during our morning runs. The whirring sound of a remote-control airplane, a loud thud, a plume of smoke, and everyone laughed. The running gag was that we were being bombed by Mexican kamikaze pilots.

Fast-forward four years. I was an intelligence officer on a special team supporting operational command and special forces in Afghanistan. Among the operational read-ons: a curious, almost overlooked report of the latest in drone tech - a Predator outfitted with Hellfire missiles - that had attacked a target in theater with lethal accuracy, without ever being seen. No one was laughing anymore.

The success of UAVs once again called into question the ethical responsibilities of building smarter war machines. Ever since Isaac Asimov gave us his famous Three Laws in the original I, Robot, we seem to be addicted to the idea of imposing morality on machines. And then comes the logical counter-argument that technology is neutral, taking its direction from humans - moral or otherwise. Guns don't kill people; drug dealers, gang bangers, and jealous, redneck boyfriends do. And even as legions of computers across the Internet are assimilated into botnets, questions arise that can't be easily answered. We want somebody to protect us by stopping the evil-doers from having technology, but who gets to make those decisions? And who gets to decide who gets to decide?

In my opinion, it comes back to the vision society gives us - the creators. Don't like the applications you see? Dream a better dream. As I sit in my favorite coffee shop writing this post, I imagine a world where my netbook or mobile phone is home to an intelligent agent that serves me well. While I type, my avatar is bidding on a couple of eBay items, searching the Internet for new email addresses of old friends, introducing itself to new friends around me, and checking in on my car around the corner via satellite link to find out if it's been stolen. I imagine a world where our elderly no longer have to leave their homes and their dignity behind for the security of retirement homes. The house of the future no longer facilitates home care; it becomes the caregiver.

It's time to take the next step in challenging the boundaries of science... by challenging the boundaries of our own imagination. Dream better dreams.