Nvidia has unveiled a new artificial intelligence (AI) supercomputer designed to accelerate AI research and training.

Nvidia CEO Jensen Huang said the new supercomputer, the DGX-2, will ship later this year.

He said it is the world's first system to deliver two petaflops of performance.

According to Huang, the company has doubled the memory capacity of its Tesla V100 GPU, which it claims delivers the performance of up to 100 CPUs in a single graphics processor.

He added that a number of manufacturers, including Dell EMC, Hewlett Packard, IBM, and Lenovo, will roll out systems using the GPU in the next few months.

Oracle, according to him, will roll out its system in the second half of 2018.

Equipped with these systems, researchers will be able to train deep-learning AI models more quickly than ever before.

The company indicated that the new technology will enable AI to generate natural-sounding speech and text, and that researchers will need less time to train such networks.

Xuedong Huang, Microsoft’s head of speech and language, said the new V100 GPUs will “extend the accuracy of our models on speech recognition and machine translation reaching human capabilities and enhancing offerings such as Cortana, Bing, and Microsoft Translator.”