Topic: gpu

DigitalOcean launches GPU Droplets to provide AI infrastructure as a service

DigitalOcean is launching GPU Droplets, NVIDIA H100-based virtual servers for running AI workloads, allowing anyone to work with AI without needing to manage the underlying infrastructure. According to DigitalOcean, the NVIDIA H100 is one of the most powerful GPUs available today, including 640 Tensor Cores and 128 Ray Tracing Cores, which … continue reading
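
As a rough illustration of the "infrastructure as a service" angle, here is a minimal Python sketch of provisioning a Droplet through DigitalOcean's public v2 API. The size slug, region, and image below are placeholders for illustration, not confirmed GPU Droplet values; check DigitalOcean's documentation for the actual slugs and supported regions.

```python
import os
import requests

# Hypothetical sketch: create a Droplet via DigitalOcean's v2 API.
API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]

payload = {
    "name": "ai-training-node",
    "region": "nyc2",                 # placeholder region
    "size": "gpu-h100x1-80gb",        # placeholder GPU Droplet size slug
    "image": "ubuntu-22-04-x64",      # placeholder base image
}

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
print("Created droplet:", resp.json()["droplet"]["id"])
```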

Lambda Labs launches 1-Click Clusters to provide on-demand access to GPUs for AI with a low reservation minimum

Lambda Labs, a company that provides on-demand cloud access to GPUs for AI, aims to democratize who can train large AI models with the launch of Lambda 1-Click Clusters. Normally, companies that offer access to GPU clusters require minimum reservations that ensure only the largest enterprises can afford them. But smaller companies … continue reading

GPUs Are Fast, I/O is Your Bottleneck

Unless you’ve been living off the grid, the hype around Generative AI has been impossible to ignore. A critical component fueling this AI revolution is the underlying computing power, GPUs. The lightning-fast GPUs enable speedy model training. But a hidden bottleneck can severely limit their potential – I/O. If data can’t make its way to … continue reading
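
A common first-line mitigation is to overlap data loading with GPU compute so the GPU is not left idle waiting on I/O. Below is a minimal PyTorch sketch (the framework, toy dataset, and hyperparameters are assumptions for illustration, not details from the article) showing background worker processes, pinned memory, and asynchronous host-to-device copies.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy in-memory dataset standing in for data that would normally be read from disk.
dataset = TensorDataset(torch.randn(10_000, 3, 224, 224), torch.randint(0, 10, (10_000,)))

# num_workers prefetches batches in background processes; pin_memory enables
# faster, asynchronous host-to-device copies.
loader = DataLoader(dataset, batch_size=64, num_workers=4, pin_memory=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 10)).to(device)

for images, labels in loader:
    # non_blocking=True lets the copy overlap with compute on the previous batch.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    logits = model(images)
```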

Akash updates its “Supercloud for AI” with easier access to GPUs

Overclock Labs, creator of the open-source distributed network Akash, aims to tackle the difficulty of finding on-demand compute with new updates to its Supercloud, essentially a “cloud of clouds” that lets users access compute resources, including GPUs, from a wide array of providers, ranging from independent operators to hyperscalers, according to Akash.  … continue reading

ITOps Times Open-Source Project of the Week: Llama2 WebUI

With this project, users can run Llama 2 locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). It supports Llama-2-7B/13B/70B with 8-bit and 4-bit quantization, as well as GPU inference (in as little as 6 GB of VRAM) and CPU inference. Llama 2 is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to … continue reading
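
For context on what 4-bit local inference looks like, here is a minimal sketch using Hugging Face transformers with bitsandbytes quantization; this is an assumed stack for illustration, not the Llama2 WebUI project's own code, and the gated model ID requires accepting Meta's license on Hugging Face.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization so the 7B model fits in a small GPU memory budget.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated model; license acceptance required
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on available GPU/CPU automatically
)

inputs = tokenizer("In one sentence, what is a GPU?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```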

NVIDIA announces new processors, collaboration tools, and supercomputer building blocks

NVIDIA made a number of product announcements and updates at the GPU Technology Conference. Here are a few highlights: a new processor featuring a data-center-infrastructure-on-a-chip architecture (DOCA). The new BlueField-2 DPU will enable breakthroughs in networking, storage, and security performance, NVIDIA explained. The new processor is optimized to offload critical networking, storage, and security … continue reading

NVIDIA launches data center inference platform for AI-powered services

NVIDIA announced two advances to its GPU technology this week at GTC Japan, both aimed at accelerating inference for AI-powered voice, video, image, and recommendation services. The first is the NVIDIA TensorRT Hyperscale Inference Platform, an inference software solution that runs on the second component of the announcement, the NVIDIA Tesla T4 GPU, based on the NVIDIA Turing … continue reading
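
TensorRT's job is to compile a trained model into an optimized inference engine, the kind of reduced-precision work the T4's Turing Tensor Cores target. As a rough sketch only, using a TensorRT 8.x-era Python API rather than the release current at this announcement, and a placeholder model.onnx, building an FP16 engine looks roughly like this:

```python
import tensorrt as trt

# Compile an ONNX model into a serialized TensorRT engine with FP16 enabled.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:      # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)    # allow FP16 Tensor Core paths

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```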

SQream and X-IO collaborate on analytics for massive datasets

GPU database developer SQream today announced its first “technological collaboration” with enterprise data storage and advanced computing company X-IO Technologies: the integration of SQream’s GPU-based edge computing with X-IO’s compact 2U form-factor Axellio database and storage technology for rapid analytics on massive datasets. According to the announcement, benchmarks for the collaborative solution showed … continue reading
