The truth is, for a growing class of complex problems, AI is now faster and more effective than both humans and hand-coded computer programs. Knowing that, it makes sense to put it to work for us. Even so, there’s a gap in knowledge about computer vision and how to integrate it into products optimally. Companies need practical, step-by-step guidelines to make it all happen. I hope to bridge that gap in my talk at the Embedded Vision Summit.
Embedded vision is computer vision built directly into devices like phones or baby monitors for practical use. Using small cameras that let the device “see” its environment, embedded vision products collect data and carry out all sorts of tasks. For example, doorbells can now identify who’s at the door and alert you without your having to open it. This way, you can tell whether the person on the other side is a complete stranger or the local FedEx driver delivering your package. That helps keep you safe, and it’s particularly valuable for people who live alone, as well as for elderly or disabled people.
Another example is industrial equipment. Small sensors and cameras embedded in machines monitor the machinery and verify that it is working as intended. This minimizes risk and keeps productivity steady, since owners don’t have to worry about human error.
The thing is, we’ve come a long way from using computer vision just to identify cats, dogs, and cars with deep learning image recognition software. Distinguishing among a thousand image categories is interesting, but it’s not especially applicable to everyday life. With the right mindset and a little creativity, though, businesses can use embedded vision to foster growth and unlock more efficient ways of completing tasks.
Little cameras can really make big things happen.
This is why I am so glad to be speaking at the Embedded Vision Summit this year.
Using cameras for automation is one of the fastest-growing sectors across all industries, because people everywhere are starting to realize they need the insight and benefits embedded vision technology offers. It increases safety: automotive monitoring applications, for example, watch the driver’s body language for signs of drowsiness. It also speeds up procedures, such as analyzing a patient’s x-ray for anomalies. Embedded vision opens doors to new opportunities so companies can grow.

While AI was once mainly the stuff of science fiction books and films, it has now matured to the point where what was once science fiction is science fact. AI is poised to transform all aspects of our lives, from triaging the severity of a patient’s condition via chatbots to recommending movies on Netflix. Much of this simply wasn’t possible before; AI and its subfields, machine learning and deep learning, have made it so. Progress was slow at first, but it is now in a phase of rapid growth.
Just look at some real-world examples. The Amazon Echo ecosystem, Tesla’s automated cars, smartphones, and shopping terminals all use deep learning and big-data-driven embedded vision to improve the user experience. Soon enough, it will be present in products across all industries.
AI is changing the way we do things from the inside out, and at an accelerating pace. These solutions let us perform tasks faster and more reliably than humans can. AI software can also be upgraded as requirements evolve, and it can improve itself through iterative training. In addition, while an expert may know a great deal, the human brain can store only a finite amount of knowledge. A doctor has plenty of information to draw on when diagnosing your condition, but it’s certainly not unlimited. Machine learning lets devices draw on vastly larger stores of data to identify and solve problems. Compare the human brain with machine learning at that scale, and it’s obvious why AI is such a hot topic.
For a concrete example, consider an autonomous car’s ability to recognize traffic signs. When it’s up to the driver, there’s still roughly a 1.68% error rate. That’s surprisingly high, considering traffic signs are easy for most people to recognize. With a deep learning artificial neural network, it’s possible to reduce the error rate to about 0.14%, and accuracy continues to improve. That is how powerful AI can be.
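To put those two error rates in perspective, here is a quick back-of-the-envelope calculation. The figure of one million sign observations is purely hypothetical, chosen only to make the comparison concrete:

```python
# Compare the human and deep-learning error rates quoted above.
human_error_rate = 0.0168   # ~1.68% misread rate for human drivers
dl_error_rate = 0.0014      # ~0.14% misread rate for a deep-learning classifier

# Hypothetical volume of sign observations, for illustration only.
signs_seen = 1_000_000

human_misses = round(signs_seen * human_error_rate)   # 16,800 misreads
dl_misses = round(signs_seen * dl_error_rate)         # 1,400 misreads
improvement = round(human_error_rate / dl_error_rate, 1)  # 12.0x fewer errors

print(f"Human misreads: {human_misses}")
print(f"Deep-learning misreads: {dl_misses}")
print(f"Error reduction: {improvement}x")
```

In other words, at the same observation volume the deep-learning system makes about twelve times fewer mistakes, and the gap widens as models keep improving.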
At the Embedded Vision Summit, I’ll present how you can leverage computer vision and AI to develop a transformational product: one that advances your business, wins with customers, and gains you a competitive advantage. I’ll explore in greater depth how to plan, design, and implement these technologies in any product or service, helping you deliver greater value to your customers and your enterprise. I’ll cover troubleshooting and optimizing your embedded vision product, including how to make systems faster, smaller, and more energy-efficient. And I’ll share examples of how we at Luxoft succeeded, failed, and learned from our failures to get where we are today.
If you’ll be at the Embedded Vision Summit 2017 in Santa Clara, I hope you’ll come by and say hello.
Alexey Rybakov, interviewed by our staff blog writer Sarah Beaulieu.