Monday, March 16, 2026

"Musk’s xAI Hiring Credit Experts, Bankers to Teach Grok Finance"

From Bloomberg, March 16:

Elon Musk’s artificial intelligence startup xAI is looking to hire bankers and private credit lenders to make its Grok chatbot better at finance strategy, joining rival AI firms in pushing software for investing professionals.

xAI is actively recruiting Wall Street bankers, portfolio managers, traders and credit analysts for its data annotation teams that train Grok, according to a series of job postings on its website. These experts are expected to teach the AI system to think through financial modeling, including leveraged loan syndication, distressed investing and niche bonds such as mortgage-backed securities and collateralized loan obligations.

The company is also hiring financial experts in the crypto and equity markets, the postings show.

The top AI developers have increasingly focused on convincing more business professionals to pay up for their software, with multiple startups specifically eyeing the financial sector. OpenAI and Anthropic PBC have released tools meant to streamline market analysis, investment memos and other work. Those moves have spooked investors in legacy software providers that some fear may be rendered obsolete.

xAI, which merged with Musk’s SpaceX last month, is generally viewed as lagging competitors in signing up business customers. To date, much of xAI’s revenue has come from deals with Musk’s other ventures, including Tesla Inc. and SpaceX.

Musk’s AI company is rebuilding its business strategy after a turbulent start to the year, during which it lost many staffers, including much of its founding team, and faced a global uproar over Grok generating non-consensual explicit images.

Last week, Musk hired two senior employees from Cursor, a leading AI coding startup that is in fundraising discussions at a $50 billion valuation. Musk admitted at a recent conference that xAI is behind on coding, a feature that has been a key revenue driver for OpenAI and Anthropic....

....MORE 

Possibly related:
2021/2023: "How to poison the data that Big Tech uses to surveil you" (GOOG; FB; AMZN; MSFT; TWTR)
We've been posting on machine learning and AI for a decade and strolling through the archives might allow us to avoid reinventing the wheel. Plus there is some wickedly fun stuff we've collected over the years.

Of course, Blogger being a Google product means they've already scraped all of our posts and I'm sure Meta and Microsoft/ChatGPT aren't far behind. Pity we didn't poison the data-well a bit more....

And:

2018 
....The Pathological and the Perturbed
The other category of adversarial machine-learning attacks is known as "evasion." This strategy targets systems that have already been trained. Rather than trying to corrupt training data, it tries to generate pathological inputs that confuse the model, causing it to produce incorrect results.
The spam filter attack, where you trick an algorithm into seeing spam as ham, is an example of evasion. Another is "Hyperface," a collaboration between Hyphen Labs and Adam Harvey, a specially designed scarf engineered to fool facial recognition systems by exploiting the heuristics these systems use to identify faces. Similarly, in a recent study, researchers developed a pair of glasses that consistently cause a state-of-the-art facial recognition system to misclassify faces it would otherwise identify with absolute certainty....
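To make the evasion idea concrete, here is a minimal sketch of the best-known technique in this family, the fast gradient sign method (FGSM): nudge each input feature a small amount in the direction that most reduces the model's confidence. Everything below is illustrative, not a real attack on a deployed system: the "spam filter" is a toy logistic-regression classifier with made-up weights, and the feature values are hypothetical.

```python
import math

# A minimal sketch of an evasion attack on a toy linear "spam filter".
# All weights and inputs are hypothetical; a linear model is used so the
# input gradient is just the weight vector itself.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Probability that input x is class 1 ('ham', i.e. not spam)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

# Hypothetical trained weights for a 4-feature classifier.
w = [1.5, -2.0, 0.5, 1.0]
b = -0.2

# A clean input the model confidently labels 'ham'.
x = [1.0, -0.5, 0.3, 0.8]

# Evasion step (FGSM-style): move each feature against the sign of the
# gradient of the class score, scaled by a budget epsilon.
epsilon = 1.2
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(predict(x, w, b))      # high probability: correctly classified
print(predict(x_adv, w, b))  # probability collapses: misclassified
```

Real attacks of this kind (the adversarial glasses and the HyperFace scarf mentioned above) apply the same principle to deep networks, where the gradient is obtained by backpropagation and the perturbation is constrained to stay visually inconspicuous.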