Live video analysis with a Raspberry Pi and Amazon Rekognition Video

May 24, 2018
Demo of Amazon Kinesis Video Streams and Amazon Rekognition Video on a Raspberry Pi!

Full instructions in this blog post: https://aws.amazon.com/blogs/machine-learning/easily-perform-facial-analysis-on-live-feeds-by-creating-a-serverless-video-analytics-environment-with-amazon-rekognition-video-and-amazon-kinesis-video-streams/

GitHub repository to build the client: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp
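Before Rekognition Video can match anyone, it needs a face collection to search against, which the blog post above walks through in detail. As a quick orientation, here is a minimal boto3 sketch of that one-time setup; the region, collection ID, image file, and external image ID are hypothetical placeholders, not values from the demo.

```python
import boto3

# Placeholders throughout -- adjust region, collection ID, and image path to your setup.
rekognition = boto3.client("rekognition", region_name="eu-west-1")

# Create a collection to hold known faces (one-time operation).
rekognition.create_collection(CollectionId="my-face-collection")

# Index a reference photo of the person to recognize.
with open("reference-photo.jpg", "rb") as image:
    response = rekognition.index_faces(
        CollectionId="my-face-collection",
        Image={"Bytes": image.read()},
        ExternalImageId="julien",  # hypothetical label for this face
    )

print("Indexed", len(response["FaceRecords"]), "face(s)")
```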

Transcript

Hi, this is Julien from Arcee. In this video, I want to show you a quick demo of streaming live video from a Raspberry Pi into Amazon Rekognition Video, and we'll see if it can recognize my face in the stream. I've got a Raspberry Pi here with a Pi camera, and I built a Kinesis client that captures the video and streams it into Kinesis Video Streams. All the instructions for this come from a superb blog post by one of my colleagues, and I've put the URL in the video description.

So let's start this client. I'm going to stream at 320 by 200 resolution, with a 256-kilobit bitrate. Here we go. We can see Kinesis receiving the video fragments being pushed by the client. In a few seconds, those fragments get to AWS, and as you can see on my screen over there, once we give it a few seconds to stabilize, you can see my live video in the Kinesis Video Streams console. It says 320 by 200, the bitrate is a little over 270 kilobits, and we have about 33 seconds of delay, which is the time it takes to send everything into Kinesis and for the console to process it. There's a bit of delay, but then again, I don't have a lot of bandwidth here.

You might hear some beeping noises: those are emails being sent through SNS, because Rekognition Video is detecting faces in this stream. So let me look at the camera and see if we can actually see some of those emails saying I'm in here. Okay, this should be enough. Yes, and you'll have to trust me: it says one known person, and that's me, because I built a face collection with my face in it. Rekognition Video matches my face from the video against the collection.

So there you go, a really short video. If you want to replicate this, all you need is a Raspberry Pi and a camera. Read the blog post referenced in the description, and within maybe an hour or two, you can get this running too. It's really cool, and you can build lots of cool things with it. That's it for today. Thank you. Bye bye.
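The face matching in the demo is done by a Rekognition Video stream processor attached to the Kinesis video stream: it searches each frame against the face collection and writes match records to a Kinesis data stream. A hedged boto3 sketch of that wiring; every name and ARN below is a placeholder for resources you create yourself, and the blog post covers the exact IAM and stream setup.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

# ARNs and names are hypothetical -- substitute the resources from your own setup.
rekognition.create_stream_processor(
    Name="pi-face-search",
    Input={
        "KinesisVideoStream": {
            "Arn": "arn:aws:kinesisvideo:eu-west-1:123456789012:stream/pi-stream/1234567890123"
        }
    },
    Output={
        "KinesisDataStream": {
            "Arn": "arn:aws:kinesis:eu-west-1:123456789012:stream/face-matches"
        }
    },
    Settings={
        "FaceSearch": {
            "CollectionId": "my-face-collection",
            "FaceMatchThreshold": 85.0,
        }
    },
    RoleArn="arn:aws:iam::123456789012:role/RekognitionVideoRole",
)

# Start analyzing the live feed.
rekognition.start_stream_processor(Name="pi-face-search")
```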
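The beeping emails in the video come from the downstream half of the pipeline: a Lambda function reads the stream processor's output records from the Kinesis data stream and publishes to an SNS topic whenever a known face is matched. A minimal sketch of such a handler, assuming a placeholder topic ARN and hedging on the exact record schema (Rekognition Video writes JSON records containing a FaceSearchResponse array, where each detected face carries a list of MatchedFaces from the collection):

```python
import base64
import json

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:face-match-alerts"  # placeholder

def handler(event, context):
    """Triggered by the Kinesis data stream that Rekognition Video writes to."""
    for record in event["Records"]:
        # Kinesis-triggered Lambda events carry base64-encoded payloads.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for face in payload.get("FaceSearchResponse", []):
            matches = face.get("MatchedFaces", [])
            if matches:
                best = max(m["Similarity"] for m in matches)
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="Known person detected",
                    Message=f"{len(matches)} match(es) in the collection, "
                            f"best similarity {best:.1f}%",
                )
```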

Tags

Raspberry Pi, Amazon Rekognition, Kinesis Video Streams, Face Recognition, Live Streaming, Demo

About the Author

Julien Simon is the Chief Evangelist at Arcee AI, specializing in Small Language Models and enterprise AI solutions. Recognized as the #1 AI Evangelist globally by AI Magazine in 2021, he brings over 30 years of technology leadership experience to his role.

With 650+ speaking engagements worldwide and 350+ technical blog posts, Julien is a leading voice in practical AI implementation, cost-effective AI solutions, and the democratization of artificial intelligence. His expertise spans open-source AI, Small Language Models, enterprise AI strategy, and edge computing optimization.

Previously serving as Principal Evangelist at Amazon Web Services and Chief Evangelist at Hugging Face, Julien has helped thousands of organizations implement AI solutions that deliver real business value. He is the author of "Learn Amazon SageMaker," the first book ever published on AWS's flagship machine learning service.

Julien's mission is to make AI accessible, understandable, and controllable for enterprises through transparent, open-weights models that organizations can deploy, customize, and trust.