Sci-fi-level Artificial Intelligence (AI) like HAL 9000 has been promised since the 1960s, but PCs and robots stayed dumb until recently. Now tech giants and startups are announcing the AI revolution: self-driving cars, robo-doctors, robo-investors, and so on. PwC recently estimated that AI will contribute $15.7 trillion to the world economy by 2030. "AI" is the 2017 buzzword, just as "dot com" was in 1999, and everyone claims to be into AI. Don't be confused by the AI hype. Is this a bubble or the real thing? What's different from the older AI flops?
AI is not easy or fast to apply. The most exciting AI examples come from universities or the tech giants. Self-appointed AI experts who promise to revolutionize any company with the latest AI in a short time are spreading AI misinformation; some are just rebranding old tech as AI. Everyone already uses the latest AI through the services of Google, Microsoft, Amazon, and the like. But "deep learning" will not be mastered soon by the majority of businesses for custom in-house projects. Most have insufficient relevant digital data: not enough to train an AI reliably. As a result, AI will not kill all jobs, especially because humans will be needed to train and test each AI.
AI can now "see", and it masters vision tasks such as identifying cancer and other diseases from medical images, statistically better than human radiologists, ophthalmologists, dermatologists, and so on. It can drive cars, read lips, and more. AI can paint in any style learned from samples (Picasso's, or yours) and apply that style to photos. It can also do the inverse: guess a realistic photo from a painting, hallucinating the missing details. And AIs that look at screenshots of web pages or apps can write code producing similar pages or apps.
(Style transfer: learn a style from one image, apply it to another. Credits: Andrej Karpathy)
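To make the "seeing" claim concrete, here is a minimal sketch that classifies a photo with a pretrained network. The model choice (ResNet-50 via torchvision) and the file name are my illustrative assumptions, not specifics from this article:

```python
# Minimal sketch of a machine that "sees": classify a photo with a
# pretrained network. The model (ResNet-50) and file name are
# illustrative assumptions, not specifics from the article.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

img = Image.open("photo.jpg").convert("RGB")    # any photo
batch = weights.transforms()(img).unsqueeze(0)  # resize, crop, normalize
with torch.no_grad():
    probs = model(batch).softmax(dim=1)

idx = int(probs.argmax())
print(weights.meta["categories"][idx], float(probs[0, idx]))
```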
AI can now "hear", and not only to understand your voice: it can compose music in the style of the Beatles (or yours), imitate the voice of any person it listens to for a while, and so on. The average person can't tell which painting or piece of music was made by a human and which by a machine, or which voice is the real person and which the AI impersonator.
AIs trained to win at poker learned to bluff, handling missing and potentially fake, misleading information. Bots trained to negotiate and find compromises learned to deceive too, guessing when you're not telling the truth and lying as needed. A Google Translate AI trained only on Japanese⇄English and Korean⇄English examples could also translate Korean⇄Japanese, a language pair it was never trained on. It seems to have built an intermediate representation of its own, encoding any sentence regardless of language.
Machine learning (ML), a subset of AI, makes machines learn from experience, that is, from examples of the real world: the more data, the more they learn. A machine is said to learn from experience with respect to a task if its performance at the task improves with experience. Most AIs are still made of fixed rules and do not learn. From now on I will use "ML" to mean "AI that learns from data", to underline the difference.
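Here is a minimal sketch of "performance improves with experience" in Python with scikit-learn; the dataset (handwritten digits) and model are my illustrative choices, not from this article:

```python
# Minimal sketch of "learning from data": the same model gets better
# as it sees more labeled examples. Dataset and model are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)   # labeled examples: digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 1000):             # "experience" = examples seen so far
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(n, "examples ->", round(model.score(X_test, y_test), 3))
```

Accuracy on unseen test images typically climbs as the number of training examples grows, which is exactly the definition of learning given above.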
Artificial Neural Networks (ANNs) are only one approach to ML; others include decision trees, support vector machines, and more. Deep learning is an ANN with many levels of abstraction. Despite the "deep" hype, many ML methods are "shallow". Winning MLs are often a mix: an ensemble of methods, such as trees + deep learning + others, trained independently and then combined. Each method makes different errors, so averaging their results can beat any single method at times, as in the sketch below.
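A sketch of such an ensemble with scikit-learn; the specific models (a decision tree, an SVM, a small neural network) and the dataset are assumptions chosen to mirror the mix described above:

```python
# Sketch of an ensemble: three different methods trained independently,
# predictions combined by averaging ("soft" voting). Models and dataset
# are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),                 # "shallow"
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),  # not an ANN
        ("mlp", make_pipeline(StandardScaler(),                           # a small ANN
                              MLPClassifier(max_iter=2000, random_state=0))),
    ],
    voting="soft",  # average the predicted probabilities of the three models
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))  # each model errs differently; the average can win
```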
The old AI did not learn. It was rule-based: many "if this then that" statements written by humans. That counts as AI, since it solves problems, but not as ML, since it does not learn from data. Most current AI and automation systems are still rule-based code. ML has been known since the 1960s, but, like the human brain, it needs billions of computations over lots of data. Training an ML model on a 1980s PC took months, and digital data was rare. Handcrafted rule-based code solved most problems fast, so ML was forgotten. But with today's hardware (NVIDIA GPUs, Google TPUs, etc.) you can train an ML model in minutes, good parameter settings are known, and far more digital data is available. So after 2010, one AI field after another (vision, speech, language translation, game playing, etc.) was mastered by MLs, which beat rule-based AIs, and often humans too.
Why did AI beat humans at chess in 1997, but at Go only in 2016? For problems humans can master as a limited, well-defined rule-set, for example beating Kasparov (then world champion) at chess, it's enough (and best) to write rule-based code the old way. The possible next dozen moves in chess (an 8 x 8 grid with movement limits) number only in the billions: in 1997, computers simply became fast enough to explore the outcomes of enough possible move sequences to beat humans. But in Go (a 19 x 19 grid with free placement) there are more possible games than atoms in the universe: no machine could try them all in billions of years. It's like trying random letter combinations until you get this article, or random paint strokes until you get a Picasso: it will never happen. The only known hope is to train an ML on the task. But ML is approximate, not exact, so it should be used only for intuitive tasks you can't reduce to "if this then that" deterministic logic in reasonably few steps. ML is "stochastic": it handles patterns you can analyse statistically but can't predict precisely.
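Some back-of-the-envelope arithmetic behind that contrast, using commonly cited average branching factors (roughly 35 legal moves per chess position, roughly 250 per Go position; the figures are approximations, not from this article):

```python
# Rough size of the game tree when looking ahead d moves, using commonly
# cited average branching factors (~35 for chess, ~250 for Go). These
# are approximations for illustration only.
CHESS, GO = 35, 250

for d in (5, 10, 20):
    print(f"look-ahead {d:2} moves: chess ~ {CHESS**d:.1e} positions, "
          f"Go ~ {GO**d:.1e} positions")
```

Already at a 20-move look-ahead, Go's tree (~10^48 positions) dwarfs chess's (~10^31), which is why brute-force search stopped working and learned, approximate evaluation took over.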
ML automates automation, as long as you have prepared the data to train from correctly. That's unlike manual automation, where humans come up with the rules to automate a task: a lot of "if this then that" describing, for example, which emails are likely to be spam, or whether a medical photo shows a cancer. With ML, instead, we only feed in data samples of the problem to solve: lots (thousands or more) of spam and non-spam emails, of cancer and non-cancer photos, and so on, all first sorted, cleaned, and labeled by humans. The ML then figures out (learns) the rules by itself, as if by magic, but it does not explain those rules. You show it a photo of a cat, the ML says it's a cat, but gives no indication of why.
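A minimal sketch of that workflow for the spam example; the tiny inline "dataset" and the model choice (naive Bayes over word counts) are mine for illustration, and a real filter would need thousands of labeled emails:

```python
# Sketch: learn a spam filter from labeled examples instead of writing
# "if this then that" rules. The four-email dataset is absurdly small,
# purely for illustration; real training needs thousands of examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",         # labeled spam by a human
    "cheap pills, limited offer",   # labeled spam by a human
    "meeting moved to 3pm",         # labeled not-spam by a human
    "quarterly report attached",    # labeled not-spam by a human
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)           # the ML figures out the "rules" itself

print(model.predict(["free offer: win now"]))  # expected: ['spam']
# ...but it gives no human-readable rule explaining *why* it decided that.
```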
Of course, you don't have to have NVIDIA's handy little $129,000 ML-in-a-box computer for your style transfer work, at least not if you're a genius photographer.
"Auto Mechanics In the Style of Michelangelo--UPDATED" (also Rembrandt)
...A bit of Rembrandt inspired garageiness:
...MORE
And from the artist's atelier (ok, website), Still Life 1: