Liftr® Insights℠ Announces New Accelerator Classes and Brands for the Liftr Cloud Regions Map™


Liftr Insights has released two new features for its Liftr Cloud Regions Map

Liftr Cloud Regions Map explores regional cloud service provider availability.


Users can now search the types and brands of IaaS and bare metal compute accelerators – GPUs, FPGAs, TPUs, and others – as they are deployed across the top four cloud providers’ public cloud infrastructure.

Backed by software automation, the Liftr Cloud Regions Map is updated every month as cloud providers announce new data centers, cloud infrastructure, and component offerings across the globe, keeping the information behind technology investment decisions current.

In addition, Liftr Insights has addressed compliance obstacles, making it easier for regulated financial services companies to consume its value-added data. More information on Liftr Insights’ compliance policy can be found in the company’s FAQ.


  • Make Informed Business Investments


The Liftr Cloud Regions Map’s new features help investors make informed decisions about cloud service provider provisioning by locating accelerator classes and brands across the globe.

“We have assembled highly insightful and accurate data-driven tools to address research reports, forecasting and recommendations across the cloud services and infrastructure landscape,” says Tab Schadt, Founder of Liftr Insights.

“These tools provide a competitive advantage for assessing and implementing investment strategies concerning public cloud companies and related infrastructure components,” Schadt continued.

The new accelerator class addition shows the classes of hardware technology used to accelerate compute performance across processor-based and disaggregated instance types, with a brief illustrative sketch following the list:

  • GPU (Graphics Processing Unit) is used to accelerate graphics and other highly parallel matrix math calculations; deep learning matrix math proved to be a good match for the same hardware. NVIDIA dominates this category.
  • FPGA (Field Programmable Gate Array) can be programmed at the hardware level to optimize performance for specific algorithms that are not well suited to GPUs or general-purpose processors, or to tune deep learning and machine learning algorithms that can take advantage of mixed reduced-precision calculations. Intel and Xilinx are major suppliers of FPGAs for cloud deployments.
  • MOE (Math Offload Engine) is designed to accelerate broad classes of math calculations, much like a GPU but with the graphics bits stripped out. This class includes Google’s Cloud TPU instance types, which are very good at general forms of matrix math useful for deep learning and machine learning.
  • CNA (Custom Neural Accelerator) is designed for optimal acceleration of specific classes of deep learning (DL) and artificial intelligence (AI) algorithms. This class will include AWS’s in-house Inferentia inferencing processor, which AWS has publicly stated it will deploy later this year.
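
To make the taxonomy concrete, the sketch below models the four classes in Python and maps a few of the brands named in this announcement to them. It is a minimal illustration only; the brand names and mapping are examples drawn from the text above, not an export of Liftr data.

from enum import Enum

class AcceleratorClass(Enum):
    """The four accelerator classes described above."""
    GPU = "Graphics Processing Unit"
    FPGA = "Field Programmable Gate Array"
    MOE = "Math Offload Engine"
    CNA = "Custom Neural Accelerator"

# Illustrative brand-to-class examples taken from the text above;
# not a Liftr Cloud Regions Map data export.
EXAMPLE_BRANDS = {
    "NVIDIA GPU": AcceleratorClass.GPU,
    "Intel FPGA": AcceleratorClass.FPGA,
    "Xilinx FPGA": AcceleratorClass.FPGA,
    "Google Cloud TPU": AcceleratorClass.MOE,
    "AWS Inferentia": AcceleratorClass.CNA,
}

for brand, accel in EXAMPLE_BRANDS.items():
    print(f"{brand}: {accel.name} ({accel.value})")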


The new accelerator brands feature shows the specific brands of accelerators offered by the tracked cloud service providers across the globe.

These new Liftr Cloud Regions Map accelerator features are derived from the Liftr Cloud Components Tracker™, which provides much more detailed information about accelerator instance types, configurations, and pricing.

This information is critical for cloud and semiconductor investors and their portfolios. For example, when a cloud provider announces a new region of operation, it’s pivotal that an investor has an up-to-date competitive picture of the chips and capabilities deployed into the new region.

Similarly, when a semiconductor vendor announces a new cloud chip or board-level product, investors can research which cloud provider has deployed it and if the deployment is broad or focused. That real-time knowledge is a strategic asset in determining cloud and chip vendor competitiveness.

  • Valuable Insights from the Liftr Cloud Components Tracker


The Liftr Cloud Components Tracker monthly report identifies the configurations, pricing, and components of all infrastructure-as-a-service (IaaS) and bare metal compute instances deployed by the tracked cloud service providers, region by region around the world. Reports are sent to customers within three weeks of the end of every month.
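
As a rough illustration of how region-by-region records of this kind could be consumed, the sketch below tallies a few hypothetical rows by provider, region, and accelerator class. The field names, instance type labels, and prices are placeholders, not the actual Liftr Cloud Components Tracker schema or data.

from collections import Counter

# Hypothetical sample rows; field names, instance type labels, and prices
# are placeholders and do not reflect the Liftr Cloud Components Tracker.
sample_records = [
    {"provider": "AWS", "region": "us-east-1",
     "instance_type": "gpu-example", "accelerator_class": "GPU",
     "hourly_price_usd": 3.00},
    {"provider": "Google Cloud", "region": "us-central1",
     "instance_type": "moe-example", "accelerator_class": "MOE",
     "hourly_price_usd": 8.00},
    {"provider": "Microsoft Azure", "region": "westeurope",
     "instance_type": "gpu-example", "accelerator_class": "GPU",
     "hourly_price_usd": 1.00},
]

# Count instance types per provider, region, and accelerator class.
counts = Counter(
    (r["provider"], r["region"], r["accelerator_class"]) for r in sample_records
)
for (provider, region, accel_class), n in counts.items():
    print(f"{provider} {region}: {n} {accel_class} instance type(s)")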

With the Liftr Cloud Components Tracker’s monthly delivery cadence, technology investors and cloud stakeholders can make more strategic and efficient business decisions, rather than relying on obsolete data from survey-based research providers, which is typically delivered almost two months after the end of each quarter.

Liftr Insights does this with modern, DevOps-based data collection that researches cloud technologies, cloud services, cloud industry trends, and internet-attached infrastructure. Liftr Insights delivers automated, real-time insights, analysis, and hard data about specific cloud service providers and the cloud industry overall to enterprise IT, the cloud ecosystem, and investors.

Visit LiftrInsights.com or email Contact(at)LiftrInsights(dot)com for more details.

For sales inquiries, contact Sales(at)LiftrInsights(dot)com.

Liftr and the Liftr logo are registered service marks of Liftr Insights. The following are trademarks and/or service marks of Liftr Insights: Liftr Insights, Liftr Cloud Insights, Liftr Cloud Components Tracker, and Liftr Cloud Regions Map.

DoubleHorn is a registered service mark of DoubleHorn. The DoubleHorn logo is a service mark of DoubleHorn.

The following are registered intellectual property marks, trademarks, or service marks of their respective companies, along with related icons and logos:

Alibaba Group Holding Limited: “Alibaba”, “Aliyun”

AWS: “AWS”, “Amazon Web Services”

Microsoft: “Azure”

Google: “Cloud TPU”, “Google Cloud”, “Google Cloud Platform”, “GCP”

NVIDIA: “NVIDIA”
