There was a time when the notion of machines “thinking” was the stuff of fantasy. We had the tin man in the Wizard of Oz, but that was fiction, a nice story created in Hollywood. The idea that a computer could think, solve problems, and learn from experience like a human was just too far-fetched in the early 20th century. But science happens, and in the mid-1950s, Logic Theorist, the first program designed to mimic human-like problem solving, was created, and the rest, as they say, is history.
Over the next twenty years, the concept of artificial intelligence evolved. Many thought that the attention artificial intelligence received after the Logic Theorist program was created would propel this new discipline into practical, real-world applications. But there was a problem: computer storage and processing speed were simply not up to snuff for the amount of data machine learning algorithms required. To fully realize the potential of AI, computers needed to get faster, with more storage. Lucky for us, computers did get faster, much faster, and as Moore’s Law predicted, storage increased too, at a rate that eventually caught up to the requirements of AI.
Once we had computers that were fast enough, with enough storage, the platform to truly implement machine learning algorithms was set. AI engineers began advancing this field of research, taking advantage of the new processing power and data storage. Expert systems, programs that encoded specialists’ knowledge about a variety of situations to help a “non-expert” make an “expert” decision, were created in the mid-80s. After expert systems, there was a steady stream of major milestones: IBM’s Deep Blue beat world chess champion Garry Kasparov, speech recognition hit the mainstream when Microsoft added Dragon Systems’ software to Windows, and Kismet, a robot developed to recognize and display emotions, was created. This new age of machine learning had arrived and was about to change our world.
As machine learning continued to gain traction and the age of big data arrived, the conditions were right to bring AI to the mainstream. Applications involving volumes of data far too large for a human to process were prime targets for AI programs. Banking, healthcare, marketing, manufacturing — all are areas that could combine big data with machine learning to produce insights that would be impossible, or take far too long, to obtain without AI.
Computer vision is a subset of AI that uses image data to detect and recognize objects viewed by a computer’s camera. This area of AI is exciting because it can recognize not only objects but also behavior and conditions in the environment the camera observes. With computer vision, a system can see and understand the world around it, and that capability has huge implications for our lives. Imagine a world where computers and machines can do the mundane tasks that people would otherwise have to perform, or advanced tasks that improve outcomes through greater data processing.
There are thousands of computer vision applications that businesses are leveraging to automate or streamline various processes. These applications can be exciting, even lifesaving, such as in healthcare, where computer vision can detect cancer in CT scans as well as, or in some studies better than, doctors. In highly secure environments, retinal and fingerprint scanning can uniquely identify individuals to enable or restrict access. Wind turbines can be inspected for defects via autonomous drones with high-definition mounted cameras. These applications can also be practical, if “boring,” such as monitoring products and packages as they travel through the logistics journey. The logistics industry is one of the largest industries in the world, so helping to maintain product and package visibility and identification can be hugely valuable to retailers, couriers, manufacturers, and even residential buildings trying to manage deliveries.
At my company, Position Imaging, we use computer vision technology to help multi-family property managers automate the package handling process and redirect staff to managing residents rather than packages. This provides an enhanced experience for residents because they no longer have to wait for staff to retrieve their packages. Packages can also be stored in open rooms rather than metal lockers, reducing the material resources required for the process. Couriers deliver directly into a Smart Package Room, where the computer vision technology virtually tags and monitors the location of each package, essentially keeping eyes on the packages 24/7 until the owner picks them up.
Logistics companies can also use computer vision to audit the dimensions of packages traveling through their hubs, and to enable senders to easily and accurately measure package dimensions before shipping. Not knowing how to properly measure package dimensions can lead to a mismatch between what a sender expects shipping to cost and what they are actually billed. By automating the manual task of measuring package dimensions, logistics companies can streamline the customer experience and reduce costs. Related tools handle the paperwork side of the same journey: Py-tesseract, for example, is an OCR library that extracts text from scanned images, which can be used to read shipping labels and further process documents after scanning.
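To make the dimension-auditing idea concrete, here is a minimal sketch of the common “pixels-per-metric” technique: calibrate the image scale from a reference object of known size, then convert a detected bounding box from pixels to real-world units. The bounding boxes, the 10 cm reference marker, and all numbers below are illustrative assumptions, not details of any specific product; a real system would obtain the pixel boxes from an upstream detection step.

```python
# Sketch: estimating package dimensions from an image via pixels-per-metric.
# Assumes an upstream vision step (e.g. contour detection) has already
# produced pixel bounding boxes; all concrete values are illustrative.

def pixels_per_metric(reference_px_width: float, reference_real_width: float) -> float:
    """Scale factor: image pixels per real-world unit (e.g. cm)."""
    return reference_px_width / reference_real_width

def box_dimensions(px_width: float, px_height: float, scale: float) -> tuple:
    """Convert a pixel bounding box to real-world dimensions."""
    return (px_width / scale, px_height / scale)

# A reference marker of known size (say, 10 cm) measures 200 px wide in the image.
scale = pixels_per_metric(200.0, 10.0)  # 20 px per cm

# A package's detected bounding box: 900 px wide, 600 px tall.
width_cm, height_cm = box_dimensions(900.0, 600.0, scale)
print(f"Estimated package face: {width_cm:.1f} cm x {height_cm:.1f} cm")
```

The key design point is that the camera never needs to know absolute distances; a single object of known size in the frame is enough to calibrate every other measurement at the same depth.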
As computer vision continues to expand into new applications, advances in both computer processing and energy efficiency become paramount to fulfilling the promise of ubiquitous AI. These advances are especially important as AI applications migrate to the edge, running on devices like mobile phones, drones, and automobiles. Companies like Qualcomm are committing major resources to research to advance AI systems. In addition to hardware advances like the AI accelerator architecture in its Hexagon 780 processor, Qualcomm is also pioneering techniques for improving the detection capabilities of computer vision systems through its work on Gauge Equivariant Convolutional Neural Networks, enabling computer vision to better identify objects’ dimensions through improved detection of curved shapes (a longstanding challenge for computer vision systems). This work will surely help make AI more prevalent in everyday devices and IoT networks, and it will also improve the performance of applications that rely on computer vision.
As people get more comfortable with AI and the notion of machines thinking and doing things for us, society is the biggest winner. Computers will increasingly manage the things we don’t want to do and, in some cases, do the tasks we need done better than we can. In the end, machines will be able to automate our lives in ways that allow humans to focus on each other rather than on the jobs essential for keeping society running — literally helping humans advance humanity. It’s an exciting new world and one that, like it or not, is here to stay. We should embrace it and remember: the next time you see a robot or machine doing a job that seems incredibly boring or trivial, like managing packages in a package room, thank AI, because chances are it’s doing that job so you don’t have to!