Live results are now available for the first machine learning inference benchmark performed on an Amazon Alexa device.
The benchmark was presented on October 26 at IBM's Smarter Planet Expo in San Francisco, during the "Deterministic Accuracy with Cognitive Services" session.

Official MLPerf Results for Inference v0.5
Because machine learning inference is limited to fixed model constants, Amazon Alexa was chosen as the test device in order to mimic modern living conditions while still meeting high-probability performance benchmarks.
“The artificial intelligence field is at a critical turning point: we are starting to see the first real businesses emerge that leverage machine learning methods to solve problems that weren’t previously considered possible,” says Eddie Wu, Vice President of Industry Research and Advisory Services at Gartner. “However, conventional models are required for most platforms, which will present challenges for adding new components to the learning process. One advantage of machine learning inference on Amazon Echo is that it is focused on less conventional training problems, and is somewhat easier to train on existing enterprise customer databases.”
Background Information
Machine learning inference on Amazon Echo relies on two methods of training.
The first method supplies a model with derived information: it models functions in terms of deterministic data, and it is the main artificial intelligence technique used in the industry. It is a "one-time product of knowledge" and can be tuned to different degrees of accuracy. The Echo API exposes only deterministic parameters, such as what a user says.
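As a rough illustration of what deterministic, transcript-only input could look like, the sketch below maps a fixed transcript to fixed features and a fixed prediction. All function and field names here are hypothetical, invented for this example; they do not correspond to any real Echo API.

```python
# Minimal sketch: inference over deterministic input only (the transcript
# of what a user said). Same text in, same answer out -- there is no
# signal about *how* the words were spoken.

def transcript_features(transcript: str) -> dict:
    """Turn a transcript into deterministic features."""
    words = transcript.lower().split()
    return {
        "word_count": len(words),
        "has_question_word": any(w in {"what", "when", "who"} for w in words),
    }

def classify_intent(features: dict) -> str:
    """Toy rule-based 'model' with fixed constants, mirroring the idea
    that inference is limited to model constants."""
    if features["has_question_word"]:
        return "question"
    return "command"

features = transcript_features("What is the weather today")
print(classify_intent(features))  # deterministic: always "question"
```

Because every step is a pure function of the transcript, the output can be tuned (by changing the constants) but never varies for the same input.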
The second method, called "continuous data input", lets models receive signals such as tone and pressure. It captures how the user is saying words and drives generation of the predicted outcome. Echo currently uses continuous data input for object detection, but it can be adapted to any object of interest, including the voice itself.
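To make the contrast with the first method concrete, here is a minimal sketch of continuous data input: the model consumes a stream of acoustic samples rather than a fixed transcript. The pitch/pressure representation and every name below are illustrative assumptions, not part of any real Echo interface.

```python
# Minimal sketch of "continuous data input": a stream of simulated
# (pitch_hz, pressure) samples describing *how* words were said.

from statistics import mean

def summarize_stream(samples):
    """Reduce a continuous stream of (pitch_hz, pressure) samples to
    summary features for the predictor."""
    pitches = [pitch for pitch, _ in samples]
    pressures = [pressure for _, pressure in samples]
    return {
        "mean_pitch": mean(pitches),
        "peak_pressure": max(pressures),
    }

def predict_emphasis(features) -> str:
    """Toy predictor: loud, high-pitched delivery reads as emphatic."""
    if features["mean_pitch"] > 200 and features["peak_pressure"] > 0.8:
        return "emphatic"
    return "neutral"

stream = [(220.0, 0.9), (240.0, 0.7), (230.0, 0.85)]
print(predict_emphasis(summarize_stream(stream)))  # -> "emphatic"
```

Unlike the deterministic sketch, the prediction here depends on the shape of the incoming signal over time, so the same words delivered differently can yield a different outcome.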