Intel Developer Cloud Review - Cloud with AI & FPGA Superpowers and Tons of Empty Pages

Nowadays clouds are very popular as an easy and fast way for big companies to make money. Customers don't need to play with a Linux shell from the start, because a good cloud does the heavy lifting for them - want a database, a load balancer or monitoring? No problem, get all of it in a few clicks. Funny enough, this lifestyle can bring another problem: too many cloud tools become critical to the service, and most of them are proprietary, so a possible migration to another cloud can be painful. This is called vendor lock-in.

Let's dig deeper and find out what powerful tools are available in Intel Devcloud.

First look

To sign up, a user needs an email and a phone number to verify - standard practice to protect a service from an army of bots. Access to powerful virtual machines or GPUs requires adding a credit card. The solid difference: you get small cloud resources to play with for free and can decide whether this cloud is good for you or not. Some funny cloud services (I'm looking at you, Oracle!) want a credit card first and can then reject a user if the card is not good enough!

To be honest, the interface looks like the early DigitalOcean web GUI - a minimalistic style with no huge lists of tools on the left and right. And no waiting 55 seconds just to start a VM! Also, some tools are hosted on the intel.com website, so I literally got lost there a couple of times and had to type devcloud.intel.com into the browser address bar.

Gears - available hardware

  • Intel Xeon Scalable processors: we saw mostly Xeon Gold in action
  • Intel Core i7/i9 - 11th, 12th and 13th generations
  • Intel GPUs: Arc, Data Center GPU Flex and Max series
  • Intel FPGAs - Stratix, Arria and Max series

Virtual Machines

Intel Devcloud gives you the ability to run your services inside a VM. You can also attach a GPU to a virtual machine, which is super helpful for AI, 3D rendering and production video conversion (especially to the open AV1 format).
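
As a rough illustration (assuming ffmpeg built with the SVT-AV1 encoder is installed on the VM - nothing here is Devcloud-specific), such a transcode could look like this:

# transcode a source clip to AV1 with SVT-AV1, keeping the audio untouched (input.mp4 is a placeholder)
ffmpeg -i input.mp4 -c:v libsvtav1 -preset 6 -crf 35 -c:a copy output.mkv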

Surprisingly, it's easy to get lost on the website. This is probably the easiest way to reach the VMs: devcloud.intel.com -> top bar -> Intel Developer Cloud -> Sign In. Unfortunately, this is a common problem - a simple interface, yet it's hard to find the right part of the website. Search isn't limited to Devcloud, so you will probably get a gigaton of absolutely unrelated stuff. So, bookmarks for the win.

The operating system is the latest Ubuntu LTS - currently 22.04. No custom OS or images are supported at the moment, which probably means users should rely on Docker, Podman or LXC containers for anything exotic. Bare metal access is currently available on request only.
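
For example, a minimal sketch of bringing a containerized service up on the stock Ubuntu 22.04 image (standard Ubuntu packages, nothing Devcloud-specific):

# install the distro Docker package and start an nginx container
sudo apt update && sudo apt install -y docker.io
sudo docker run -d --name web -p 8080:80 nginx:stable
sudo docker ps    # confirm the container is up and listening on port 8080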

Pricing

Virtual Machines Pricing

  • Beta - Intel® Trust Domain Extensions (Intel® TDX) with 4th Generation Intel® Xeon® Scalable processors - 32GB RAM / 2 TB storage - $18/hour
  • Intel® Xeon® processors, codenamed Emerald Rapids - 32GB RAM - $14/hour
  • 4th Generation Intel® Xeon® Scalable processors - 32GB RAM / 2 TB storage - $3.62/hour
  • Intel® Data Center GPU Max 1550 (four GPUs) with 4th Generation Intel® Xeon® Scalable processors - 32GB RAM / 2 TB storage - $18/hour
  • Intel® Xeon® processors, codenamed Sapphire Rapids with high bandwidth memory (HBM) HBM-Only mode - 32GB RAM / 2 TB storage - $14/hour
  • Intel® Xeon® processors, codenamed Sapphire Rapids with high bandwidth memory (HBM) – Flat mode - 32GB RAM / 2 TB storage - $14/hour
  • Intel® Xeon® processors, codenamed Sapphire Rapids with high bandwidth memory (HBM) – Cache mode - 32GB RAM / 2 TB storage - $14/hour
  • Intel® Data Center GPU Flex 170 (single GPU) with 4th Generation Intel® Xeon® Scalable processors - 32GB RAM / 2 TB storage - $15/hour
  • Intel® Data Center GPU Max 1100 with 4th Generation Intel® Xeon® Scalable processors - 32GB RAM / 2 TB storage - $4.21/hour
  • Habana Gaudi2 Deep Learning Server featuring eight Gaudi2 HL-225H mezzanine cards and latest Intel® Xeon® Processors - 96GB RAM / 30 TB storage - $10.42/hour
  • Intel® Data Center GPU Flex 170 (three GPUs) with 3rd Generation Intel® Xeon® Scalable Processors - 32GB RAM / 2 TB storage - $13.00/hour

Pricing is high, to be honest. There are two big differences from typical cloud hosting: pay-per-hour billing and generous storage in every tier. The Habana Gaudi2 machine looks like a very good deal - 96GB RAM and 30TB (!) of storage for about $10 per hour, though that still adds up to roughly $7,500 a month if you keep it running around the clock.

AI

Artificial Intelligence is a big player in the modern market and will be much bigger in a few years. ChatGPT, Stable Diffusion and Midjourney are top hits with millions of dollars in capitalization, and more are coming. Intel is in a unique position here: it is both a graphics chip manufacturer and a seller of cloud GPU products - a double win. You don't need to invest in costly hardware to innovate and train your AI models. Of course, personal hardware with full control rocks, but starting with several thousand dollars isn't always the best way. A developer sandbox on production hardware helps to better understand all the risks and potential issues.

Devcloud covers several hot topics with pre-built solutions.

On the software side, the highlights are the OpenVINO™ toolkit for generative AI and the Intel® Extension for TensorFlow for deep learning.
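
As a quick sanity check on any of the machines, something like this works (a sketch assuming pip and Python 3 are available; the API call is from the OpenVINO 2023.x Python package):

pip install openvino
# list the devices OpenVINO can see on this node (CPU, GPU, ...)
python3 -c "from openvino.runtime import Core; print(Core().available_devices)"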

How to get started

First we need to go to devcloud.intel.com, sign up and get approved. Then it will be possible to select a CPU and GPU, upload our models and data and test it all online. All of this is documented, plus tons of examples are available. Most of them come in Jupyter Notebook format, and some are equipped with SSH access: download a Bash script from the Devcloud website, make it executable and voila! I personally like that it's plain Bash, so there's no need to install PowerShell on Linux, no third-party tools, and no Windows at all - neither bare metal nor a VM. Intel Devcloud is Linux friendly - verified.
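
The flow is roughly this (the script name below is a placeholder - the real one is generated for your account on the Devcloud website):

chmod +x devcloud-setup.sh    # make the downloaded script executable
./devcloud-setup.sh           # writes the SSH keys/config needed for Devcloud access
# then connect with plain ssh using the host alias the script sets up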

Edge

Devcloud provides infrastructure for Edge solutions, small and large, with multiple types of hardware. The service is maintained by Intel Corporation and Colfax International, with security assessments provided by Bishop Fox. Telemetry is automated, but users have to enable it manually in the Jupyter Notebook UI. Another interesting note: in JupyterLab and the Container Playground there is no root access, so installing additional software can be a bit complicated.
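
Without root, user-level installs are the usual workaround - for example, inside a notebook cell (a generic sketch, not Devcloud-specific):

!pip install --user pandas    # goes into ~/.local, no root needed
!pip list --user              # confirm what ended up in the user site-packages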

Available storage:

  • JupyterLab environment: up to 50 GB.
  • Container Playground: up to 5 GB per container.

Memory:

  • JupyterLab: up to 2 GB RAM per node.
  • Container Playground: up to 4 GB RAM per node.

Jupyter Notebook launcher

The Edge Jupyter Notebook launcher looks like this:

Edge Jupyter Launcher

There are ready-made notebooks for OpenVINO, XPython and YOLOv8, available in one click. To create a new notebook, click File -> New -> Notebook. Now you can play with code copied from the examples or write your own.

Let's check available resources:

!lscpu
  Model name:            Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz
    CPU family:          6
    Model:               85
    Thread(s) per core:  2
    Core(s) per socket:  20


!lsmem
    Memory block size:         2G
    Total online memory:     256G
    Total offline memory:      0B

!free -m
                  total       used       free     shared  buff/cache   available
    Mem:         256559       3617     126192          2      126749      251183

20 cores and 256 GB of RAM is very good for testing and prototyping - enjoy your stay!

Hardware

Devcloud allows users to test their workloads on several combinations of CPU and GPU, on bare metal and in containers:

  • Intel Core i7-1370PE & Intel® Iris® Xe Graphics
  • Intel N97 & Intel® UHD Graphics
  • Intel Atom® x7425E & Intel UHD Graphics
  • Intel Core i3-N305 & Intel UHD Graphics
  • Intel Core i7-13700 & Intel® Arc™ A770 Graphics (8GB)
  • Intel Core i3-12300HL & Intel Iris Xe Graphics
  • Intel Core i7-12800HL & Intel Iris Xe Graphics

So it's cool to get the full picture - how the software behaves on a low-end Atom x7425E or N97 versus a top i7-1370PE. Xeon is available only for containerized workloads: Intel Xeon Gold 6338N and Xeon Platinum 8480+, 3rd and 4th generation respectively.

Benchmarks

Geekbench 6

OpenSSL

The hash and cipher results are in bytes processed per second; the RSA and DSA lines are sign/verify operations per second. To compare with other results, check this list.

  • MD5 - 621557080
  • SHA-1 - 846598140
  • SHA-256 - 414486530
  • SHA-512 - 555424090
  • DES - 0.00
  • 3DES - 31132330
  • AES-128 - 1321350490
  • AES-192 - 1156325380
  • AES-256 - 994547030
  • RSA Sign - 1807.0
  • RSA Verify - 61640.4
  • DSA Sign - 4261.3
  • DSA Verify - 4984.3
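
To get comparable numbers on your own hardware, the stock OpenSSL benchmark tool can be run with a similar algorithm set (the exact flags used for the run above are an assumption):

openssl speed md5 sha1 sha256 sha512 des-ede3 aes rsa2048 dsa2048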

OpenVINO Benchmark Tool

The guide is available here; we will use OpenVINO 2023.1, Python 3.10 and the resnet-50 TensorFlow model. All the code is available in the tutorials directory inside the Edge Notebook interface.

Download the model:

!omz_downloader --name resnet-50-tf -o models

Next step - convert it to OpenVINO IR format with FP32 precision:

!mo \
--input_model models/public/resnet-50-tf/resnet_v1-50.pb \
--input_shape=[1,224,224,3] \
--mean_values=[123.68,116.78,103.94] \
-o models/FP32

Next we should select the target node - where the code will run. Let's start with the Xeon Platinum 8480+ - this run uses the CPU only. The benchmark period is 60 seconds.

qsub benchmark_app_job.sh -l nodes=1:idc092 -F "results/ CPU throughput" -v VENV_PATH,OPENVINO_RUNTIME

qsub benchmark_app_job.sh -l nodes=1:idc082 -F "results/ GPU throughput" -v VENV_PATH,OPENVINO_RUNTIME
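
The job script presumably wraps OpenVINO's benchmark_app; a direct invocation would look roughly like this (the exact arguments inside benchmark_app_job.sh are an assumption):

# 60-second throughput run of the converted ResNet-50 IR on the CPU
benchmark_app -m models/FP32/resnet_v1-50.xml -d CPU -hint throughput -t 60
# same model, but on the GPU node
benchmark_app -m models/FP32/resnet_v1-50.xml -d GPU -hint throughput -t 60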

Results:

  • Xeon Platinum 8480+ - 6527.46 FPS
  • Intel Arc A770 Graphics (Alchemist) GPU - 1943.88 FPS
  • i7-13700 CPU - 157.88 FPS
  • Data Center Flex 170 GPU (16 GB VRAM) - 1710.32 FPS
  • Xeon Gold 6448Y CPU - 1146.83 FPS

We got very interesting results. Sure, the Xeon 8480+ is a workhorse! 56 cores and 105 MB of cache - good enough to be the clear winner. But it costs more than $10k! What I suggest you pick is the Arc A770 - about a third of the Xeon 8480+'s performance, but at a price of around $300. It even outperforms the more powerful Flex 170 GPU, which is also very strange; my guess is that the GPU was under heavy load.

oneAPI

oneAPI is an open standard for a unified application programming interface (API) across multiple compute accelerators: GPUs, AI accelerators, FPGAs and more. In Devcloud, oneAPI is a must-have for writing and testing high-performance apps: IoT, rendering, deep learning and more. The main languages are C, C++ and SYCL.
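
On a Devcloud node, a minimal SYCL build loop looks something like this (the setvars.sh path is the standard oneAPI install location and vector_add.cpp is a placeholder sample - both are assumptions here):

source /opt/intel/oneapi/setvars.sh        # load the oneAPI compilers and libraries into the shell
icpx -fsycl vector_add.cpp -o vector_add   # compile a SYCL source with the oneAPI C++ compiler
./vector_add                               # run it on the selected device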

Devcloud includes the following tools:

  • Intel® oneAPI Base Toolkit - tools and libraries for building high-performance apps.
  • oneAPI HPC Toolkit with Intel C++ Compiler, Fortran compiler and MPI library.
  • oneAPI AI Analytics Toolkit - DL training, inference, and data analysis.
  • oneAPI Rendering Toolkit - open source libraries for high-fidelity rendering and ray tracing.
  • OpenCL for FPGA development - Accelerate applications by targeting heterogeneous platforms with Intel CPUs and FPGAs.

IPv6

Unfortunately, no global IPv6 was found. Even worse - total silence in the docs and manuals. What the heck, Intel? It's 2023!
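
For reference, this is how one can check from inside a VM (standard iproute2 commands, nothing Devcloud-specific):

ip -6 addr show scope global    # should list global IPv6 addresses - empty on the nodes I tried
ip -6 route show default        # and no default IPv6 route either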

Empty Pages - Intel, why?

Tons of empty pages, everywhere. Errors? Can't log in? Service not available? Doesn't matter - if something is broken, you get an empty page. Well, maybe it's worth adding an error code, a support link or even a forum page as a last resort, but no - empty pages for a pure experience. It feels like using a failing website where another part of it falls apart every minute.

Top benefits - why does a customer need Devcloud?

  • Access to modern Intel hardware and the ability to prototype and test applications on it, improve performance and eliminate problems.
  • Train and develop AI models without large investments in hardware.
  • Good documentation and a huge set of examples, available in one click.
  • Free tier: try before purchase and find out the pitfalls.
  • Large community and solid support level from a top company.
