Object detection with MXNet + Raspberry Pi + Amazon Polly

April 13, 2017
I'm running the Inception model in MXNet to recognize objects, and I'm passing the output to Amazon Polly for text-to-speech. Please check out my introduction to the MXNet API here: https://medium.com/@julsimon/an-introduction-to-the-mxnet-api-part-1-848febdcf8ab
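The post doesn't include code, so here's a minimal sketch of what the detect-and-speak loop could look like, assuming a pre-trained Inception checkpoint loaded through MXNet's Module API and boto3's Polly client. All file paths, function names, and the voice choice are my assumptions, not details from the original:

```python
# Sketch of an image-recognition + text-to-speech pipeline (hypothetical
# names throughout; the original post does not publish its code).

def top_prediction(probs, labels):
    """Return (label, probability) for the highest-scoring class."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best], probs[best]

def speech_text(label, prob):
    """Build the sentence Polly will read aloud, e.g.
    "I'm 86% sure that this is a coffee mug."""
    return "I'm %d%% sure that this is a %s." % (round(prob * 100), label)

def classify_and_speak(image_file):
    # Lazy imports: mxnet and boto3 are only needed on the Pi itself.
    import mxnet as mx
    import boto3

    # Load the pre-trained Inception symbol and parameters
    # (checkpoint prefix and synset path are assumptions).
    sym, arg_params, aux_params = mx.model.load_checkpoint("Inception-BN", 0)
    mod = mx.mod.Module(symbol=sym, label_names=None)
    mod.bind(for_training=False, data_shapes=[("data", (1, 3, 224, 224))])
    mod.set_params(arg_params, aux_params)

    with open("synset.txt") as f:
        labels = [line.strip() for line in f]

    # Resize the captured image to the network's 224x224 input,
    # move channels first, and add a batch dimension.
    img = mx.image.imread(image_file)
    img = mx.image.imresize(img, 224, 224)
    img = img.transpose((2, 0, 1)).expand_dims(axis=0).astype("float32")

    mod.forward(mx.io.DataBatch([img]), is_train=False)
    probs = mod.get_outputs()[0].asnumpy()[0]
    label, prob = top_prediction(probs.tolist(), labels)

    # Hand the sentence to Polly and save the audio stream to disk;
    # it can then be played with any command-line player.
    polly = boto3.client("polly")
    resp = polly.synthesize_speech(
        Text=speech_text(label, prob),
        OutputFormat="mp3",
        VoiceId="Joanna",
    )
    with open("speech.mp3", "wb") as f:
        f.write(resp["AudioStream"].read())
```

The phrasing produced by `speech_text` matches what you hear in the video ("I'm 86% sure that this is a coffee mug"); everything else is one plausible way to wire the pieces together.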

Transcript

Hey, Julien here. I've got this cool demo with a Raspberry Pi, an MXNet model for image recognition, and Amazon Polly for text-to-speech. Let's try it with some objects and see how it goes. First object. I'm 86% sure that this is a coffee mug. It looks like a coffee mug. Let's try another one. A remote control. I'm 99% sure that this is a remote control. Yep, it is a remote control. And you can see here the logs on the Raspberry Pi. Let's try a third object. Hey, I've seen this guy before. I made it. I'm 86% sure that this is a mask. It is a mask. Alright, let's try a final one. Hmm, yeah, I've seen this one before too. I'm 87% sure that this is a wine bottle. Yeah, it is a wine bottle, and it's not empty yet, but we're gonna work on that. And I hope you appreciate my super fancy camera stand. How's that? Pretty cool. Yep. And again, my logs on the Raspberry Pi. Well, that's it for today. I'll probably write an article about this, but I couldn't wait to show you the real thing in action. That's it. Bye-bye.

Tags

RaspberryPi, MXNet, AmazonPolly, ImageRecognition, DIYProject