Little Cameras Make Big Things Happen – Learn How to Create and Optimize Embedded Vision Products

Our world is undergoing a massive transformation. Artificial intelligence, or AI, has taken the world by storm, so much so that futuristic self-driving cars may become the norm sooner than you think. You see it in your phone, when you shop online, and even in brick-and-mortar retail stores. Over time, smart assistants like Siri learn to tend to your needs based on previous requests. Online, big names like Amazon suggest products based on your past purchases. And even at your local grocery store, the checkout printer dispenses coupons you're more likely to use. AI was once a niche sci-fi term; now it's changing the way you interact with the world. As a result, everyone is hoping to hop on the bandwagon and use technologies like computer vision to their advantage.

The truth is, AI is better and faster than both humans and human-coded computer programs at solving certain complex problems. Knowing that, it makes sense to put it to work for us. Even so, there's a knowledge gap around computer vision and how to integrate it into products optimally. Companies need practical, step-by-step guidelines to make it all happen. I hope to bridge that gap in my talk at the Embedded Vision Summit 2017, held May 1st-3rd in Santa Clara, California.

Embedded vision is computer vision built directly into devices like phones or baby monitors for practical use. Using little cameras that let the device "see" its environment, embedded vision products collect visual data and carry out all sorts of tasks. For example, doorbells can now identify who's at the door and alert you without you having to open it. That way, you can tell whether the person on the other side is a complete stranger or the local FedEx guy delivering your package. This helps keep you safe, and it's particularly valuable for people who live alone and for elderly or disabled persons.
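
To make that concrete, here is a minimal sketch of the kind of detection loop a camera-equipped doorbell might run. It uses OpenCV's stock Haar-cascade face detector purely for illustration; the camera index, model file choice, and alerting step are my own assumptions, not the pipeline of any particular product.

```python
# Illustrative sketch: watch a camera feed and flag frames that contain
# a face - the first step a smart doorbell might take before trying to
# recognize who is there.
import cv2

# OpenCV ships with pretrained Haar-cascade face models.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # assumption: the doorbell camera is device 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # A real product would crop each face and run a recognition model
        # here to decide "stranger" vs. "known visitor" before alerting.
        print(f"Visitor detected: {len(faces)} face(s) in frame")
cap.release()
```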

Industrial equipment is another example. Small sensors and cameras help monitor machinery and ensure it works as intended. This minimizes risk and keeps productivity steady, since owners don't have to worry about human error.

The thing is, we've come a long way from using deep learning image recognition simply to identify cats, dogs, and cars. Distinguishing among a thousand or so image categories is interesting, but on its own it's not hugely applicable to everyday life. With the right mindset and a little creativity, however, businesses can use embedded vision to foster growth and unlock more efficient ways of completing tasks.

Little cameras can really make big things happen.

This is why I am so glad Luxoft is returning to the Embedded Vision Summit. Luxoft has more than 10 years of deep software engineering experience under its belt, and I'm excited we'll be able to contribute to the conference just as we did last year. We have professional engineering teams ready to assist anyone who wants to take the next step and learn more. I want more people to be able to integrate computer vision into their real-world products to improve efficiency and solve real-world problems.

The Embedded Vision Alliance, which hosts the event, is an organization of around 50 member companies. While small, it makes a big impact in the tech community by encouraging companies to take the concept of "computer vision as a universal sensor" and develop it into practical products. Hoping to inspire and empower product creators, the alliance connects development teams with appropriate ecosystem partners and offers insight into market research and technology trends. Its first few years were rocky because of limited interest in AI, but it has since expanded well past its initial focus of bringing vision sensors to IoT devices. Over the last two to three years it has experienced explosive growth, and it is now a major player in the world of AI.

Using cameras for automation is one of the fastest-growing areas across all industries, because people everywhere are starting to realize they need the insight and benefits embedded vision offers. It increases safety, as in automotive applications that monitor a driver's body language for signs of drowsiness. It also speeds up procedures, such as analyzing a patient's X-ray for anomalies. Embedded vision opens doors to new opportunities so companies can grow. AI was once referenced mainly in science fiction books and films, but it has now reached a level of maturity where yesterday's science fiction is today's science fact. Together with its subfields, machine learning and deep learning, AI is poised to transform all aspects of our lives, from assessing how severe a patient's condition is via chatbots to recommending movies to watch on Netflix, in ways that simply weren't possible before. Progress was slow at first, but it is now in a phase of rapid growth.

Just look at some real-world examples. The Amazon Echo ecosystem, Tesla's self-driving cars, smartphones, and shopping terminals all use deep learning and big-data-based embedded vision to improve the user experience. Soon enough, it will be present in products across all industries.

AI is changing the way we do things from the inside out, at an accelerating pace. These solutions let us perform tasks faster and more reliably than a human can. AI software can also be upgraded as needs evolve, and it can teach itself to improve through continued training and evaluation. In addition, while a human expert may have deep knowledge, the brain can only store a finite amount of it. A doctor has a lot of information to draw on when diagnosing your condition, but it's certainly not unlimited. Machine learning lets devices draw on a vastly larger pool of data to identify and solve issues. When you compare the human brain to machine learning on that dimension, it's obvious why AI is such a hot topic.

For a concrete example, consider an autonomous car's ability to recognize traffic signs. When it's up to the driver, there's still roughly a 1.68% error rate, which is surprisingly high considering that traffic signs are easy for most people to recognize. With a deep learning artificial neural network, it's possible to reduce the error rate to about 0.14%, and the accuracy keeps improving. That's how powerful AI can truly be.
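
For readers who want a feel for what a "deep learning artificial neural network" means in practice, here is a minimal sketch of a small convolutional classifier of the kind used on traffic sign benchmarks. The layer sizes, the 43-class output (borrowed from the public GTSRB dataset), and the commented-out training call are illustrative assumptions, not the specific network behind the numbers above.

```python
# Minimal sketch: a small convolutional network for traffic sign
# classification, GTSRB-style. Architecture details are illustrative.
from tensorflow.keras import layers, models

NUM_CLASSES = 43           # GTSRB defines 43 sign categories
INPUT_SHAPE = (32, 32, 3)  # small RGB crops around each sign

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=INPUT_SHAPE),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # regularization against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With labeled sign crops loaded as arrays, training is one call:
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(val_images, val_labels))
```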

[Photo: Alexey Rybakov]

At the Embedded Vision Summit, I'll present how you can leverage computer vision and AI to develop a transformational product that advances your business, wins with customers, and gains competitive advantage. I'll explore in greater depth how to plan, design, and implement these technologies in any product or service, helping deliver greater value to your customers and your enterprise. I'll cover troubleshooting and optimizing your embedded vision product, including how to make systems faster, smaller, and more energy-efficient. And I'll give examples of how we at Luxoft succeeded, failed, and learned from our failures to get to where we are today.

If you'll be at the Embedded Vision Summit 2017 in Santa Clara, check out my talk and find me at the Luxoft booth, where we'll have a number of hands-on demos, ranging from optimized Darknet/YOLO for smartphones to highly accurate real-time depth mapping to computational photography for obstruction removal, and more. Drop me a note here and I'll follow up to schedule a time for us to meet. I look forward to seeing you at the show.

Yours,
Alexey Rybakov, interviewed by our staff blog writer Sarah Beaulieu.
