NVIDIA announced two advances to its GPU technology this week at GTC Japan, both aimed at accelerating AI-powered voice, video, image and recommendation inference. The first is the NVIDIA TensorRT Hyperscale Inference Platform, an inference software solution that runs on the second component of the announcement, the NVIDIA Tesla T4 GPU, which is based on the NVIDIA Turing architecture.

The release will improve the NVIDIA platform’s ability to provide “enhanced natural language interactions and direct answers to search queries rather than a list of possible results,” the company wrote in the announcement.

According to NVIDIA, the key elements of the NVIDIA TensorRT Hyperscale Inference Platform include:

  • NVIDIA Tesla T4 GPU – Featuring 320 Turing Tensor Cores and 2,560 CUDA cores, this new GPU provides breakthrough performance with flexible multi-precision capabilities, from FP32 and FP16 down to INT8 and INT4. Packaged in an energy-efficient, 75-watt, small PCIe form factor that easily fits into most servers, it offers 65 teraflops of peak FP16 performance, 130 TOPS (tera-operations per second) of INT8 and 260 TOPS of INT4.
  • NVIDIA TensorRT 5 – An inference optimizer and runtime engine, NVIDIA TensorRT 5 supports Turing Tensor Cores and expands the set of neural-network optimizations for multi-precision workloads; the sketch after this list shows how those precision modes are selected in code.
  • NVIDIA TensorRT inference server – This containerized microservice software enables applications to use AI models in data center production. Freely available from the NVIDIA GPU Cloud container registry, it maximizes data center throughput and GPU utilization, supports all popular AI models and frameworks, and integrates with Kubernetes and Docker; a minimal client example also follows below.
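
To make the multi-precision point concrete, the following is a minimal sketch of how TensorRT 5's Python API builds an optimized engine from a trained network, with the reduced-precision modes enabled on the builder. The ONNX file name, batch size and workspace size here are illustrative assumptions rather than details from the announcement, and later TensorRT releases moved these builder attributes into a separate builder configuration:

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Parse a trained ONNX model ("model.onnx" is a placeholder)
    # into a TensorRT network definition.
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse model.onnx")

    builder.max_batch_size = 32           # illustrative batch size
    builder.max_workspace_size = 1 << 30  # 1 GiB of optimizer scratch space

    # Reduced precision: FP16 maps directly onto the Turing Tensor Cores;
    # INT8 also requires a calibration dataset, so it is shown commented out.
    builder.fp16_mode = True
    # builder.int8_mode = True
    # builder.int8_calibrator = my_calibrator  # hypothetical calibrator object

    # Build the optimized engine that the runtime executes at serving time.
    engine = builder.build_cuda_engine(network)

The resulting engine can then be serialized and placed in the inference server's model repository, which reflects the division of labor the platform describes: TensorRT optimizes the network offline, and the server executes it in production.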
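
Because the inference server is a containerized microservice, applications talk to it over the network rather than linking against a library. The sketch below assumes a server already pulled from the NVIDIA GPU Cloud registry and running with its HTTP port mapped to localhost:8000; the endpoint paths are based on the REST API of the 2018-era server and should be verified against the version actually deployed:

    import requests

    # Assumed address of a running TensorRT inference server container.
    SERVER = "http://localhost:8000"

    # Readiness probe: the same endpoint Kubernetes can use for its own
    # health checks, which is part of what makes the server orchestratable.
    ready = requests.get(SERVER + "/api/health/ready")
    print("server ready:", ready.status_code == 200)

    # The status endpoint reports which models are loaded and servable.
    status = requests.get(SERVER + "/api/status")
    print(status.text)

Exposing health and status over plain HTTP is what lets Kubernetes treat a GPU inference process like any other microservice, which is the integration the announcement highlights.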

“Every day, massive data centers process billions of voice queries, translations, images, videos, recommendations and social media interactions,” the company wrote in the announcement. “Each of these applications requires a different type of neural network residing on the server where the processing takes place. To optimize the data center for maximum throughput and server utilization, the NVIDIA TensorRT Hyperscale Platform includes both real-time inference software and Tesla T4 GPUs, which process queries up to 40x faster than CPUs alone.”