From the New York Post:
Bloomberg Editor-in-Chief John Micklethwait told his 2,400
journalists in a memo on Tuesday that he was forming a 10-person team to
lead a study on how to use more automation in writing and reporting.
Micklethwait called the robot-generated copy “smart automated content (SAC).”
A company spokesman insisted no journalists will be sacked as a result of the SAC.
“Why do we need you, if the basic idea is to get computers to do more
of the work?” Micklethwait asked in the memo, obviously addressing an
unspoken concern among his staff.
“One irony of automation is that it is only as good as humans make
it. That applies to both the main types of automated journalism. In the
first, the computer will generate the story or headline by itself. But
it needs humans to tell it what to look for, where to look for it and to
guarantee its independence and transparency to our readers. In the
second sort, the computer spots a trend, delivers a portion of a story
to you and in essence asks the question: Do you want to add or subtract
something to this and then publish it? And it will only count as
Bloomberg journalism if you sign off on it.”...MORE
And from MIT's Technology Review, computers captioning pictures:
When social-media users upload photographs and caption them, they don’t just label their contents. They tell a story, which gives the photos context and additional emotional meaning.
A paper published by Microsoft Research describes an image captioning system that mimics humans’ unique style of visual storytelling. Companies like Microsoft, Google, and Facebook have spent years teaching computers to label the contents of images, but this new research takes it a step further by teaching a neural-network-based system to infer a story from several images. Someday it could be used to automatically generate descriptions for sets of images, or to bring humanlike language to other applications for artificial intelligence.
“Rather than giving bland or vanilla descriptions of what’s happening in the images, we put those into a larger narrative context,” says Frank Ferraro, a Johns Hopkins University PhD student who coauthored the paper. “You can start making likely inferences of what might be happening.”
Consider an album of pictures depicting a group of friends celebrating a birthday at a bar. Some of the early pictures show people ordering beer and drinking it, while a later photo shows someone asleep on a couch.
“A captioning system might just say, ‘A person lying on a couch,’” Ferraro says. “But a storytelling system might be able to say, ‘Well, given that I think these people were out partying or out eating and drinking, then this person may be drunk.’”...MORE
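The contrast Ferraro draws, a per-image caption versus an inference drawn from the whole album, can be illustrated with a toy sketch. The snippet below is a minimal, rule-based stand-in in Python, not the neural-network system from the Microsoft Research paper; every function name and label set in it is hypothetical.

```python
# Toy illustration of the captioning-vs-storytelling contrast quoted above.
# This is NOT the Microsoft Research model; it is a hand-rolled, rule-based
# stand-in. All function names and label sets here are hypothetical.

def caption_image(labels):
    """Per-image captioner: describes only what is in this one photo."""
    return "A photo of " + ", ".join(labels) + "."

def tell_story(album):
    """Album-level 'storyteller': looks across the earlier photos before
    describing the last one, so it can add an inference the captioner cannot."""
    earlier = {label for labels in album[:-1] for label in labels}
    last = album[-1]
    caption = caption_image(last)
    # If earlier photos suggest drinking and the last shows someone lying down,
    # add the kind of likely inference Ferraro describes.
    if {"beer", "drinking"} & earlier and "person lying on a couch" in last:
        caption += " Given the earlier drinking, this person may be drunk."
    return caption

album = [
    ["friends", "bar", "beer"],
    ["friends", "drinking", "birthday cake"],
    ["person lying on a couch"],
]

print(caption_image(album[-1]))  # plain caption of the last photo only
print(tell_story(album))         # caption plus the album-level inference
```

The point of the sketch is only the structure: a captioner sees one image at a time, while a storytelling system conditions its description of each image on the rest of the album.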