Here's the introduction to March 19's "Facebook's Regulatory Risk Is Real and It Is Magnificent (FB)":
So far this year we have had over thirty posts on TheFacebook, as it was once known, with almost all of them being critical of the company in one way or another. And the result:

Our motivation is pretty straightforward: How can we make a buck or two off what appears to be going on? The risks highlighted can be broken down into four broad areas:

1) The media backlash to having their business models undermined by Facebook and Google.
2) The surveillance capabilities of the platform companies and the privacy/security risks they pose.
3) The deliberate addiction of users' brains via neurotransmitter manipulation and psychological engineering.
4) The use of the above characteristics by political operators to achieve their own ends and the various outrages, faux and otherwise, elicited.
Recognizing risk before the computers do is one approach. The dirty little secret of machine learning is that the computers can very quickly categorize what they are witnessing, but only if they have seen the situation previously. We used one of the funnier examples in the outro to December 2017's:
Artificial Intelligence in Risk Management: Looking for Risk in All the Wrong Places
Opportunity is where you find it; turn your risk manager into a profit center. ...
From naked capitalism, November 15:...
...AI cannot cope well with uncertainty because it is not possible to train an AI engine against unknown data. The machine is really good at processing information about things it has seen. It can handle counterfactuals when these arise in systems with clearly stated rules, like with Google’s AlphaGo Zero (Silver et al. 2017). It cannot reason about the future when it involves outcomes it has not seen....MORE

E.g.:
"When Google was training its self-driving car on the streets of Mountain View, California, the car rounded a corner and encountered a woman in a wheelchair, waving a broom, chasing a duck. The car hadn’t encountered this before so it stopped and waited."

This introduction is getting a bit wordy so we'll chop things into a couple more posts tomorrow, but that's the premise: look for risks that AI hasn't yet been trained on in an attempt to gain some asymmetric advantage.
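The failure mode described above, a model that has no concept of "never seen this before", can be sketched with a toy example. This is purely illustrative (hypothetical data and labels, not from any of the posts quoted): a bare-bones nearest-centroid classifier trained on two clusters will confidently assign a label to an input far outside anything it was trained on, rather than abstain.

```python
# Toy sketch (hypothetical data): a trained classifier labels an
# out-of-distribution input anyway, with no way to say "unknown".
import math

def centroid(points):
    # Mean of a list of 2D points.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    # Euclidean distance between two 2D points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Training data: two tight clusters the model has "seen".
ducks  = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
brooms = [(5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
cents = {"duck": centroid(ducks), "broom": centroid(brooms)}

def classify(x):
    # Always returns the nearest known label, however far away x is.
    return min(cents, key=lambda c: dist(x, cents[c]))

# In-distribution inputs are classified sensibly.
print(classify((1.1, 1.0)))   # near the duck cluster
print(classify((5.0, 5.0)))   # near the broom cluster

# An out-of-distribution input, far from both clusters, still gets
# a label -- the woman-in-a-wheelchair-chasing-a-duck problem.
print(classify((100.0, -40.0)))
```

Real systems are vastly more sophisticated, but the structural point is the same: the decision rule is defined everywhere, so novelty gets mapped onto the nearest familiar category instead of being flagged.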
If, in the meantime, you can convince yourself you are performing some sort of societal good, all the better. Speaking of the meantime, here's some good insight until we get around to parts II and maybe III.
Too much stress, not enough compensation for being right on the thesis and wrong on the timing.
Our general rule: "Only short frauds in a bull market"