Wednesday, October 23, 2019

"Q&A with Steve Roddy, vice president of the machine learning group at Arm, on culling neural nets for embedded devices, competing with Nvidia and Intel, more"

Here's hoping that SoftBank and Mr. Son are compelled by recent misadventures to re-float ARM.
From Dean Takahashi at VentureBeat:

How Arm wants to bring machine learning to ordinary computing devices
Arm may be a bit late to the whole machine learning and artificial intelligence bandwagon, at least with specialized designs for modern chips. But the designer of chip intellectual property has everybody beat in terms of volumes of AI and machine-learning chips deployed in the widest array of devices.

Arm’s customers, which include rivals Intel and Nvidia, are busy deploying AI technology everywhere. The company is also creating specific machine-learning instructions and other technology to make sure AI gets built into just about everything electronic, not just the high-end devices going into servers.

On the server level, customers such as Amazon are bringing Arm-based machine learning chips into datacenters. I talked with Steve Roddy, vice president of the machine learning group at Arm, at the company's recent TechCon event in San Jose, California.
Here’s an edited transcript of our interview.

VentureBeat: What is your focus on machine learning?

Steve Roddy: We have had a machine learning processor in the market for a year or so. We aimed at the premium consumer segment, which was the obvious first choice. What is Arm famous for? Cell phone processors. That’s where the notion of a dedicated NPU (Neural Processing Unit) first appeared, in high-end cell phones. Now you have Apple, Samsung, MediaTek, [and] Huawei all designing their own, Qualcomm, and so on. It’s commonplace in a $1,000 phone.
What we’re introducing is a series of processors to serve not only that market, but also mainstream and lower-end markets. What we originally envisioned — we entered the market to serve people building VR glasses, smartphones, places where you care more about performance than cost balance, and so on. History would suggest that the feature set shows up in the high-end cell phone, takes a couple of years, and then moves down to the mainstream-ish $400-500 phone, and then a couple of years later winds up in the cheaper phone.

I think what’s most interesting about how fast the whole NPU machine learning thing is moving is that that is happening much faster, but for different reasons than — it used to be, okay, the 8 megapixel sensor starts here, and then when it’s cheap enough it goes here, and then when it’s even cheaper it goes there. It’s not just that the component cost goes down and integrates in and it’s replaced by something else. It’s that machine learning algorithms can be used to make different or smarter decisions about how systems are integrated and put together to add value in a different way, or subtract cost in a different way....
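The "culling" in the headline refers to pruning neural networks: zeroing out the lowest-magnitude weights so a trained model fits the memory and compute budget of an embedded device. This is an illustrative sketch of magnitude-based pruning in NumPy, not Arm's actual method — the function name and the 75% sparsity figure are hypothetical choices for the example:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries of `weights` so that
    roughly `sparsity` fraction of them are zero. Magnitude pruning is
    a common compression step before deploying a neural net on an
    embedded device."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to cull
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: cull 75% of a random 4x4 weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pw = magnitude_prune(w, 0.75)
print(np.count_nonzero(pw))  # at most 4 of the 16 weights survive
```

In practice a pruned model is then fine-tuned to recover accuracy, and the sparse weights can be stored and executed more cheaply — which is one way a dedicated NPU in a mid-range phone can run models that were trained at server scale.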

"This Tech CEO Says He's Ready to Take on Google, Facebook, Amazon, and Apple" 
"The Natural Evolution of Artificial Intelligence"

ARM Chips with Nvidia AI Could Change the Internet of Things
IoT: "SoftBank's ARM Spends Big to Meet Son's Connected World Dream"
In Case You Missed It: SoftBank Transferred its $5B Stake in NVIDIA Off Its Balance Sheet and Onto the Vision Fund (NVDA)
You Understand Why Mr. Son and SoftBank Are Circling Uber, Right?

ARM Wrestles Its Way Into Supercomputing (9984 Tokyo; INTC)
Sandia National Lab to Install First Petascale Supercomputer Powered by ARM Processors (Masayoshi Son smiles)