From a technology perspective, geeks and phreaks have been seeing opportunity everywhere for that entire decade.
But that's not the way to bet.
From ZDNet, October 1:
The combination of billions of IoT devices and 5G networks is set to drive a profound change in the way computing workloads are deployed.
In recent years, computing workloads have been migrating: first from on-premises data centres to the cloud and now, increasingly, from cloud data centres to 'edge' locations where they are nearer the source of the data being processed. The goal? To boost the performance and reliability of apps and services, and reduce the cost of running them, by shortening the distance data has to travel, thereby mitigating bandwidth and latency issues.
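As a back-of-envelope illustration of why distance matters (ours, not the article's; the distances and the fiber speed are assumptions), propagation delay alone scales linearly with path length:

```python
# Back-of-envelope propagation delay. Assumptions: light travels
# through fiber at roughly c / 1.5, i.e. about 200,000 km/s, and
# we ignore routing, queuing and processing time entirely.
SPEED_IN_FIBER_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation time in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000

print(f"Distant cloud region, ~1,500 km: {round_trip_ms(1500):.1f} ms")  # 15.0 ms
print(f"Nearby edge site, ~50 km: {round_trip_ms(50):.2f} ms")           # 0.50 ms
```

Real round-trip times are several times larger once routing and processing are added, but the ratio between the two paths is the point.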
That's not to say that on-premises or cloud centres are dead -- some data will always need to be stored and processed in centralised locations. But digital infrastructures are certainly changing. According to Gartner, for example, 80 percent of enterprises will have shut down their traditional data centre by 2025, versus 10 percent in 2018. Workload placement, which is driven by a variety of business needs, is the key driver of this infrastructure evolution, says the analyst firm:
With the recent increase in business-driven IT initiatives, often outside of the traditional IT budget, there has been a rapid growth in implementations of IoT solutions, edge compute environments and 'non-traditional' IT. There has also been an increased focus on customer experience with outward-facing applications, and on the direct impact of poor customer experience on corporate reputation. This outward focus is causing many organizations to rethink placement of certain applications based on network latency, customer population clusters and geopolitical limitations (for example, the EU's General Data Protection Regulation [GDPR] or regulatory restrictions).

There are challenges involved in edge computing, of course -- notably centering around connectivity, which can be intermittent, or characterised by low bandwidth and/or high latency at the network edge. That poses a problem if large numbers of smart edge devices are running software -- machine learning apps, for example -- that needs to communicate with central cloud servers, or nodes in the intervening 'fog'. Solutions are on the way, however.
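One common mitigation for intermittent connectivity is store-and-forward at the edge node: queue readings locally during an outage and flush them upstream when the link returns. A minimal sketch (our illustration, not the article's; all names are hypothetical):

```python
import collections
import random

class EdgeBuffer:
    """Hypothetical store-and-forward buffer for an edge node."""

    def __init__(self, capacity: int = 10_000):
        # Bounded deque: if an outage outlasts local storage,
        # the oldest readings are discarded first.
        self.queue = collections.deque(maxlen=capacity)

    def record(self, reading: dict) -> None:
        self.queue.append(reading)

    def flush(self, send) -> int:
        """Send queued readings upstream; stop at the first failure."""
        sent = 0
        while self.queue:
            if not send(self.queue[0]):
                break  # link dropped again; keep the rest queued
            self.queue.popleft()
            sent += 1
        return sent

# Toy usage: a flaky uplink that succeeds about 70% of the time.
buf = EdgeBuffer()
for i in range(5):
    buf.record({"sensor": "temp-1", "value": 20 + i})
print("flushed:", buf.flush(lambda r: random.random() < 0.7))
```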
With edge computing sitting at the peak of Gartner's 2018 Hype Cycle for Cloud Computing, there's plenty of scope for false starts and disillusionment before standards and best practices are settled upon, and mainstream adoption can proceed. This introduction to ZDNet's special report looks to set the scene and assess the current state of play.
Definitions
Edge computing is a relatively new concept that has already been associated with another term, 'fog computing', which can lead to confusion among non-specialist observers. Here are some definitions that will hopefully clarify the situation.
Futurum Research
Unlike Cloud Computing, which depends on data centers and communication bandwidth to process and analyze data, Edge Computing keeps processing and analysis near the edge of a network, where the data was initially collected. Edge Computing (a category of Fog Computing that focuses on processing and analysis at the network node level)...should be viewed as a de facto element of Fog Computing.

State of the Edge 2018
The delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services. By shortening the distance between devices and the cloud resources that serve them, and also reducing network hops, edge computing mitigates the latency and bandwidth constraints of today's Internet, ushering in new classes of applications. In practical terms, this means distributing new resources and software stacks along the path between today's centralized data centers and the increasingly large number of devices in the field, concentrated, in particular, but not exclusively, in close proximity to the last mile network, on both the infrastructure and device sides.

451 Research/OpenFog Consortium
[Fog] begins on one 'end' with edge devices (in this context, we define edge devices as those devices where sensor data originates, such as vehicles, manufacturing equipment and 'smart' medical devices) that have the requisite compute hardware, operating system, application software and connectivity to participate in the distributed analytics Fog. It extends from the edge to 'near edge' functions, such as local datacenters and other compute assets, multi-access-edge (MEC) capabilities within an enterprise or operator radio access network, intermediate computing and storage capabilities within hosting service providers, interconnects and colocation facilities, and ultimately to cloud service providers. These locations have integrated or host 'Fog nodes', which are devices capable of participating in the overall distributed analytics system.

David Linthicum (Chief Cloud Strategy Officer at Deloitte Consulting)

"With edge, compute and storage systems reside at the edge as well, as close as possible to the component, device, application or human that produces the data being processed. The purpose is to remove processing latency, because the data needn't be sent from the edge of the network to a central processing system, then back to the edge...Fog computing, a term created by Cisco, also refers to extending computing to the edge of the network. Cisco introduced its fog computing in January 2014 as a way to bring cloud computing capabilities to the edge of the network...In essence, fog is the standard, and edge is the concept. Fog enables repeatable structure in the edge computing concept, so enterprises can push compute out of centralized systems or clouds for better and more scalable performance."

Here's how the OpenFog Consortium visualises the relationship between data-generating 'things' at the network edge, cloud data centres at the core, and the fog infrastructure in between:
Image: OpenFog Consortium
Market estimates
According to B2B analysts MarketsandMarkets, the edge computing market will be worth $6.72 billion by 2022, up from an estimated $1.47 billion in 2017 -- a CAGR (Compound Annual Growth Rate) of 35.4 percent. Key driving factors are the advent of the IoT and 5G networks, an increase in the number of 'intelligent' applications, and growing load on cloud infrastructure...

...MUCH MORE
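For anyone checking the arithmetic, the quoted growth rate falls straight out of the standard CAGR formula; the small gap versus 35.4 percent is rounding in the underlying figures:

```python
# Sanity-checking the quoted growth rate: $1.47B (2017) to
# $6.72B (2022) is five years of compounding.
start, end, years = 1.47, 6.72, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~35.5%, matching the quoted 35.4 percent
```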
We are probably early in posting this; we were early on NVIDIA too, back when it was a boring, not-doing-much $25-30 stock, a full year before it really got going on its run to $290. In the meantime we were able to sound halfway intelligent at dinner parties and the Thursday afternoon salons. Fake it til you make it, as they say in Silicon Valley. Here are some of our early NVIDIA posts; the top one is from May 2016, after we'd been babbling about it for a year. At the time NVDA was changing hands at $38.31:
May 12, 2016
NVIDIA Sets New All Time High On Pretty Good Numbers, "Sweeping Artificial Intelligence Adoption" (NVDA)
April 13, 2016
CERN Will Be Using NVIDIA Graphics Processors to Accelerate Their Supercomputer (NVDA)
January 5, 2016
Class Act: Nvidia Will Be The Brains Of Your Autonomous Car (NVDA)
November 2015
Stanford and Nvidia Team Up For Next Generation Virtual Reality Headsets (NVDA)
November 2015
"NVIDIA: “Expensive and Worth It,” Says MKM Partners" (NVDA)
October 2015
Quants: "Two Glenmede Funds Rely on Models to Pick Winners, Avoid Losers" (NVDA)
May 2015
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)