Posted by twayneking on Wednesday, February 18, 2015
I've got myself involved with a bunch of British, German, French and Irish computer programmers who have developed this computer device that houses an artificial intelligence with an Emotion Chip. Yes, an emotion chip - like the one Data keeps unsuccessfully experimenting with in Star Trek: The Next Generation.
In the movies, some scientist just solders together some bits of wire and silicon and voila! He has a tiny bit of technology that just slips into a specially prepared slot on his friendly neighborhood robot and pretty soon they are laughing and telling jokes and falling in love (especially the ones that are "fully functional").
What they don't show you are the rooms full of bleary-eyed computer coding monkeys and the semi-unemployed former English teachers/freelance commercial writers. We're the ones who have to write the tens of thousands of lines of dialogue and millions of lines of computer code that make this "emotion chip" actually appear to react to human emotion. It's a huge job. And, I admit it, kind of fun!
The sheer volume of dialogue we have to write is intimidating, and every line of it needs to be run through a simulator that reads your script dialogue using the computer voice. Because of the limitations of machine voices, I inevitably have to repunctuate and respell everything so that it sounds relatively human. For instance, the computer reads "Facebook" as "Fessbuke". I have to spell it "Fayce book" to get it to say "Facebook" like a human. In addition, it turns out that I'm writing dialogue and determining conversational sequences, and the coders are reproducing my conversational sequences in computer code (Heaven help us, they're following my lead?).
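For the curious, the respelling fix amounts to running every script line through a substitution table before it goes to the voice simulator. A minimal sketch in Python, assuming a hypothetical helper (the "Facebook" → "Fayce book" entry is the real example above; the function and table names are made up for illustration):

```python
import re

# Words the voice engine garbles, mapped to phonetic respellings.
# "Facebook" otherwise comes out as "Fessbuke".
RESPELLINGS = {
    "Facebook": "Fayce book",
}

def respell_for_tts(line: str) -> str:
    """Replace whole words the machine voice mangles before sending a
    script line to the voice simulator."""
    for word, phonetic in RESPELLINGS.items():
        line = re.sub(r"\b" + re.escape(word) + r"\b", phonetic, line)
    return line

print(respell_for_tts("Post it on Facebook."))  # -> Post it on Fayce book.
```

The table grows one entry at a time, every time the simulator butchers another word.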
The computer programmers are all atwitter about this thing as though it were the greatest thing since the wireless mouse. In the crowd-funding promotional video they naively call their A.I. cube "HAL" when they speak to it. To be fair, most of these guys are too young to remember 2001: A Space Odyssey, and those who have actually taken a peek at the movie somehow missed that the emotion-detecting artificial intelligence KILLED EVERYBODY ON THE SHIP EXCEPT DAVE AND IT ONLY MISSED HIM BECAUSE DAVE MANAGED TO MAKE A 30 SECOND SPACEWALK WITHOUT A HELMET! I'm not sure how they missed that. My fear is that the coders thought this might be a lively new feature for the A.I. - the excitement of knowing your A.I. might murder you in your bed. Some people need to get out of the computer room and do some base jumping or alligator wrestling. Sheesh!
Anyway, when I joined up, these guys were well on the way to making a monumentally creepy device that controls your house, picks out your music for you and tracks your Facebook Friends and decides which ones you should pay attention to (and which ones you should not). They are looking for other social media sites from which to draw information about its users. I'm not telling them about the Banjo Hangout. If that thing took a look at this bunch, it might turn up its owner's gas stove and blow out the pilot light.
There are some things one's A.I. buddy just should not know about one, know-whut-I-mean?
Once we get busy and the project director isn't paying attention anymore, I'm thinking of pulling Mike Gregory quotes off the forums and picking up some of the more colorful lines from our discussions on global warming and progressive socialism to slip into the A.I.'s repertoire.
Hey, maybe I'll use the opening bars of "Dueling Banjos" as a warning signal when the conversation between the A.I. and the little pervert who has "bonded" with it gets too creepy. I told the boss I was more than a little worried about the A.I. getting weird if it got itself bonded to some serial killer, terrorist or sado-masochist. He assures me that their version of the Three Laws of Robotics will prevent that. I didn't have the heart to tell him that Asimov's Three Laws allowed the robots in the books to extrapolate a fourth law that convinced them they should manipulate millennia of human history for "our own good". That was in the novels, and we are meant to be sympathetic with the robots' good intentions; however, Asimov may have inadvertently exposed the hazards of allowing smart people (or robots, for that matter) too much power and control over our lives. How much fun will that be if the artificial intelligences of the future decide we need to be managed for our own comfort and safety? Worse yet, what if we go along with it all because it's just easier to be herded into the feedlot than to resist?
(Insert Twilight Zone music).
Tom King © 2015