A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.
"The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition."
The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory. The platform can solve visual problems and understand sketches, giving immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner's structure-mapping theory. (Gentner received the 2016 David E. Rumelhart Prize for her work on this theory.)
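For readers curious what "structure-mapping" means in practice, here is a minimal, illustrative sketch of the core idea: an analogy is found by aligning the relational structure of two descriptions rather than their surface features. This is a toy example in Python, not CogSketch or the team's actual matcher, and the predicates and entity names (the classic solar-system/atom analogy) are invented for illustration.

```python
# Toy structure-mapping sketch: score candidate entity correspondences by
# how much relational structure they preserve between a base and a target.
# This is NOT the CogSketch/SME implementation, just an illustration.
from itertools import permutations

# Relational descriptions as (predicate, arg1, arg2) facts.
solar_system = [
    ("attracts", "sun", "planet"),
    ("more-massive", "sun", "planet"),
    ("revolves-around", "planet", "sun"),
]
atom = [
    ("attracts", "nucleus", "electron"),
    ("more-massive", "nucleus", "electron"),
    ("revolves-around", "electron", "nucleus"),
]

def entities(facts):
    """Collect the distinct entities mentioned in a relational description."""
    return sorted({arg for _, a, b in facts for arg in (a, b)})

def best_mapping(base, target):
    """Brute-force the one-to-one entity correspondence that preserves
    the largest number of relational facts (structural consistency)."""
    base_ents, target_ents = entities(base), entities(target)
    best, best_score = None, -1
    for perm in permutations(target_ents, len(base_ents)):
        mapping = dict(zip(base_ents, perm))
        score = sum(
            (pred, mapping[a], mapping[b]) in target for pred, a, b in base
        )
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

if __name__ == "__main__":
    mapping, score = best_mapping(solar_system, atom)
    print(mapping)  # {'planet': 'electron', 'sun': 'nucleus'}
    print(score)    # 3 relational facts preserved
```

The toy matcher prefers mappings that keep relations intact (the sun maps to the nucleus because both attract and out-mass their satellite), which is the flavor of alignment structure-mapping theory proposes; the real systems handle higher-order relations and far larger descriptions.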
Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern's McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.
The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems with this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but could also shrink the gap between computer and human cognition.
...MORE
And in other news: AI software is figuring out how to best humans at designing new AI software, at TechCrunch.
As noted in the intro to 2013's "Researcher Dreams Up Machines That Learn Without Humans":
Uh oh.