Use of vision systems in meat and poultry processing
Assuring quality through vision
While the fundamental design and operation of vision systems used for quality assurance haven’t changed dramatically in recent years, what has changed is how widely they are deployed, driven by falling component prices. Cheaper components also make vision systems less expensive for integrators to build into their processes, lowering costs across the board.
“You are seeing more vision systems in areas that traditionally did not have them before due to cost constraints,” says Colin Usher, senior research scientist for Georgia Tech Research Institute in Atlanta.
For example, for cooked products it is standard for companies to have vision systems checking product shape and size. Companies also are adding vision systems in line with weighers to build sorting systems that grade on visible quality characteristics as well as weight or other specifications, tasks that historically were inspected manually.
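A sorting rule of the kind described, combining a vision-derived quality score with a weight check, can be sketched as follows. The function name, score scale and thresholds here are illustrative assumptions, not any vendor's actual system:

```python
# Hypothetical sketch of a combined vision + weight sorting rule.
# Thresholds and category names are illustrative assumptions.

def classify_piece(shape_score: float, weight_g: float,
                   min_shape: float = 0.85,
                   weight_range: tuple = (450.0, 550.0)) -> str:
    """Route a product based on a vision-derived shape-conformity
    score (0-1) and its measured weight in grams."""
    low, high = weight_range
    if shape_score < min_shape:
        return "reject"    # visibly out of spec
    if not (low <= weight_g <= high):
        return "rework"    # good shape, off-weight: regrade
    return "accept"

# A well-formed piece at 500 g passes both checks.
print(classify_piece(0.92, 500.0))  # accept
```

In practice the shape score would come from the camera system and the weight from the in-line weigher; the point is simply that one decision combines both inputs.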
Suppliers of vision system hardware provide a robust set of image processing algorithms and libraries for integrators. This software does a good job of detecting things such as label and shape conformity, Usher says. In turn, many vision system integrators are selling solutions in which meat and poultry companies can tailor the system to the process, because many image processing problems share the same core building blocks. For example, image segmentation, identifying products in the image, and shape analysis are common tasks.
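To make those building blocks concrete, here is a minimal sketch of a segmentation step: separating bright "product" pixels from a darker conveyor background by thresholding, then measuring the resulting blob. The tiny image and the threshold value are synthetic assumptions; production systems use library routines for this:

```python
# Toy segmentation sketch: threshold a grayscale image, then
# measure the foreground blob's area and bounding box.

def segment(image, threshold=128):
    """Binary mask: True where a pixel is brighter than the
    background threshold (i.e. likely product)."""
    return [[pixel > threshold for pixel in row] for row in image]

def blob_stats(mask):
    """Area (pixel count) and bounding box of the foreground."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, on in enumerate(row) if on]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return len(coords), (min(rows), min(cols), max(rows), max(cols))

image = [
    [20,  25,  30, 22],
    [21, 200, 210, 24],
    [23, 205, 215, 26],
    [22,  27,  25, 28],
]
area, bbox = blob_stats(segment(image))
print(area, bbox)  # 4 (1, 1, 2, 2)
```

Shape analysis then works on exactly these kinds of measurements, comparing area, bounding box or contour features against a product specification.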
The greatest advancement the industry is seeing in quality assurance vision systems is the application of machine learning and cloud computing, Usher says. Machine learning algorithms can run locally, at the edge or in the cloud; in the cloud-based case, the system’s decisions are made on a computer housed in a remote data center and delivered over the Internet. For example, every time people use a voice-to-text feature on their cell phones, they are using cloud-computing services.
Several providers now offer cloud services with compute capability: a vision system captures images and sends them over the Internet to a data-center computer that does the heavy lifting, and the decision is returned to the processor from the cloud. While the computation happens in real time, the analysis itself takes place off-site.
“Cloud-based systems also are allowing for cheaper end hardware to make more computationally intensive decisions,” Usher explains. “It hasn’t quite trickled down to be common in the industry yet, but that’s what’s coming.”
Another use of cloud computing is training the machine learning algorithms: the algorithms are trained on a high-powered computer in the cloud using input data. Once a model is learned, running it requires substantially less computational power, so a local system can use the model without needing the cloud services.
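The train-in-the-cloud, run-locally split can be illustrated with a deliberately tiny model. A simple perceptron stands in for the expensive training step, and the data, features and labels are all illustrative assumptions; the point is that once the weights exist, inference is just a cheap dot product a line computer can handle:

```python
# Sketch of the cloud-training / local-inference split.
# All data and features here are illustrative assumptions.

def train(samples, labels, epochs=50, lr=0.1):
    """'Cloud' step: learn perceptron weights from labeled
    feature vectors (the computationally heavy part)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    """'Edge' step: applying the learned model is a single dot
    product, cheap enough for a local line computer."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy data: feature = (shape_score, normalized_weight); 1 = accept.
X = [(0.9, 1.0), (0.95, 0.98), (0.3, 1.0), (0.4, 0.5)]
y = [1, 1, 0, 0]
model = train(X, y)          # expensive, done once, off-site
print(predict(model, (0.92, 0.99)))  # cheap, done per item: 1
```

Real deployments use deep networks rather than perceptrons, but the asymmetry is the same: training consumes far more compute than inference, which is why the learned model can run locally.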
The current state of the art in machine learning is deep neural networks (DNNs). Industry leaders in this area offer application programming interfaces and algorithms that let integrators train machine vision systems for whatever tasks the processor needs, and then deploy those learned models on computers that run locally or in the cloud, Usher explains.
While vision systems are becoming more prevalent in the industry, many problems still exist depending on a processor’s requirements.
“There are still a lot of quality assurance applications where there is no solution simply because there are not enough integrators with the technical know-how to custom design it for that specific application requirement,” Usher says.
With machine learning still in its adolescence in the industry, Usher expects to see it applied more widely as it becomes more flexible. In turn, once someone solves a particular problem in this cloud computing environment, the solution can become available for others to use if the developers allow it.
Moving forward, Usher thinks the industry will continue to see a combination of machine learning taking over many of these applications along with the introduction of cloud computing services. Many meat and poultry processors also will need to address restrictive policies that make it challenging to work with cloud computing services. “The reality is that the solutions that cloud computing can provide are so attractive that they’re going to need to be changing those policies or looking hard at what those policies are,” Usher says.
Usher also thinks the industry is evolving from distributed quality assurance systems, each performing a specific task, toward more of a system-of-systems approach. For example, such a system of systems might tie in traceability from farm to fork.
“All of that is going to be tied together, and an integral part of that system is going to be these quality assurance vision systems that are collecting this data,” Usher says. NP