In "this ends badly" news, here's the latest way scientists are hastening the impending, James Cameron-foreseen robot apocalypse …
Researchers at MIT's Computer Science and Artificial Intelligence Lab enabled an artificial intelligence system to read the instruction manual for the simulation game 'Civilization' so that it could play the game better. Aaaaaaaand now it does (with a jump in win-rate from 46% to 79%) …
The extraordinary thing about Barzilay and Branavan's system is that it begins with virtually no prior knowledge about the task it's intended to perform or the language in which the instructions are written. It has a list of actions it can take, like right-clicks or left-clicks, or moving the cursor; it has access to the information displayed on-screen; and it has some way of gauging its success, like whether the software has been installed or whether it wins the game. But it doesn't know what actions correspond to what words in the instruction set, and it doesn't know what the objects in the game world represent.
So initially, its behavior is almost totally random. But as it takes various actions, different words appear on screen, and it can look for instances of those words in the instruction set. It can also search the surrounding text for associated words, and develop hypotheses about what actions those words correspond to. Hypotheses that consistently lead to good results are given greater credence, while those that consistently lead to bad results are discarded.
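For the curious: the loop described above can be sketched in a few lines of Python. This is a toy illustration, not the researchers' actual system (which used a far more sophisticated language-grounding model) — the class name, the multiplicative update rule, and the example words and actions are all our own illustrative assumptions.

```python
import random

class WordActionLearner:
    """Toy sketch: weight hypotheses linking manual words to game actions."""

    def __init__(self, words, actions, seed=0):
        self.rng = random.Random(seed)
        self.actions = actions
        # One hypothesis per (word, action) pair, all starting at equal credence.
        self.credence = {(w, a): 1.0 for w in words for a in actions}

    def choose_action(self, word):
        # Sample an action in proportion to current credence.
        # With equal starting credence, this is the "almost totally random" phase.
        weights = [self.credence[(word, a)] for a in self.actions]
        return self.rng.choices(self.actions, weights=weights, k=1)[0]

    def update(self, word, action, reward):
        # Hypotheses that lead to good results gain credence;
        # those that lead to bad results decay toward a negligible floor.
        key = (word, action)
        self.credence[key] = max(self.credence[key] * (1.5 if reward > 0 else 0.5), 1e-3)

# Toy environment: suppose the manual word "attack" really means "left-click".
learner = WordActionLearner(words=["attack"], actions=["left-click", "right-click"])
for _ in range(200):
    action = learner.choose_action("attack")
    reward = 1 if action == "left-click" else -1
    learner.update("attack", action, reward)
```

After a couple hundred trials, the correct hypothesis dominates the credence table and the "random clicking" phase is over — which, in miniature, is the whole trick behind the win-rate jump.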
We get it, guys. You want better machines. Faster, more helpful, able to … predict and satisfy whatever base desires you may have. The scientific road is a lonely one. Totally understandable. You'll be dead before any of it matters and our silicon-minded overlords crush us under their treads. We understand. It's okay. See you in hell.
[Source: Geekologie]