DeePhi Tech Showcases Speech Recognition Engine on Amazon Web Services
BEIJING, March 6, 2018
With the advantage of flexibility, FPGAs have rapidly attracted the attention of cloud service providers and developers. The world's largest cloud provider, Amazon Web Services (AWS), has partnered with Xilinx, a leading FPGA vendor, demonstrating that cloud services will be an important market for unleashing the potential of FPGA applications. As a partner of Xilinx and AWS, China's leading deep-learning acceleration solution company, DeePhi Tech, has officially launched an automatic speech recognition engine, DDESE, based on the FPGA platform on AWS. DeePhi Tech has become the first Chinese AI start-up to launch an FPGA speech recognition acceleration offering on AWS.
Major cloud service providers such as AWS have begun to deploy FPGA cloud acceleration services. Through the reach and service capability of large data centers, FPGAs now touch hundreds of thousands of application developers and enterprise users, and are quickly gaining attention in the public cloud.
“FPGA momentum in the cloud is being powered by innovators like DeePhi,” said Jeff Hennig, Head of Xilinx Ventures. “The Xilinx investment in DeePhi is proof of our commitment to support and partner with startups pioneering AI and accelerated computing in next generation data centers.”
DDESE targets speech recognition scenarios: an automatic speech recognition system built on FPGAs provides end-to-end speech recognition services to many users on AWS. DeePhi's FPGA-based deep learning acceleration solution, combining algorithm, software, and hardware co-design, delivers twice the performance of a GPU on audio recognition workloads. DeePhi Tech is authorized by AWS to deploy its FPGA speech recognition acceleration scheme in the cloud, providing acceleration services to global users. Through the co-optimization of software and hardware, more efficient cloud computing capability is achieved.
“With the help of DeePhi’s LSTM Engine, the latency of sequence-based AI applications like ASR (automatic speech recognition) and MT (machine translation) can be greatly reduced. The aim of launching DDESE on AWS is to demonstrate our AI acceleration capability via the ESE Engine, which is based on Xilinx FPGAs, as well as to introduce the entire flow of DeePhi’s end-to-end ASR solution to more customers through promotion on the cloud,” said Shan Yi, the CTO of DeePhi. “We are looking forward to building the ecosystem of AI acceleration on the cloud with Xilinx and AWS.” The launch of DDESE, the automatic speech recognition acceleration engine, marks DeePhi as the first Chinese AI start-up to launch its technology online at AWS and the first Chinese AI company to launch an accelerated solution there based on the FPGA platform.
Compared with a GPU (Tesla P4 + cudnn6020) and a CPU (Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, 8 processors), the end-to-end latency speedup of the automatic speech recognition engine is 2.06 times and 6.59 times respectively; for the LSTM part alone, it is 2.82 times and 8.64 times respectively. In the future, the combination of FPGAs and cloud services will continue to expand and deepen. DeePhi Tech will continuously optimize the performance of the technology, unlock the acceleration potential of FPGAs, and leverage the full capabilities of public cloud services.
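The speedup ratios above are simple latency ratios between a baseline platform and the FPGA engine. A minimal sketch of the calculation follows; the latency values are hypothetical, chosen only to reproduce the reported 2.06x and 6.59x end-to-end figures, and are not measured data from the release.

```python
# Hypothetical per-request latencies in milliseconds (illustrative only,
# picked so the ratios match the 2.06x and 6.59x figures in the text).
latency_ms = {"fpga": 10.0, "gpu": 20.6, "cpu": 65.9}

def speedup(baseline_ms: float, accelerated_ms: float) -> float:
    """End-to-end speedup = baseline latency / accelerated latency."""
    return baseline_ms / accelerated_ms

for platform in ("gpu", "cpu"):
    ratio = speedup(latency_ms[platform], latency_ms["fpga"])
    print(f"FPGA vs {platform.upper()}: {ratio:.2f}x")
```

Running the sketch prints `FPGA vs GPU: 2.06x` and `FPGA vs CPU: 6.59x`, matching the end-to-end figures quoted above.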
DeePhi Tech was founded by deep learning acceleration researchers from Stanford and Tsinghua University. DeePhi has long held a leading position in the FPGA field and maintains long-term, deep cooperation with FPGA giant Xilinx. Its research on the ESE speech recognition engine won the Best Paper Award at FPGA 2017, a top conference in the field of FPGA chips.