Processors continue to drive the growth of vision systems for automating procedures on processing and packaging lines. Historically, vision systems have been used for tasks such as quality sorting in food streams and as the control for pick-and-place robotic systems.

As computer processors continue to advance in power and affordability, more advanced imaging systems are finally making it to market, says Colin Usher, research scientist for Georgia Tech Research Institute (GTRI) in Atlanta. For example, one recently introduced vision system uses an illuminated cone and advanced image-processing algorithms to estimate the amount of residual meat left on a chicken frame after deboning.

“This system, along with other similar systems, is being used for statistical process control for management of processes that have historically been difficult to manage,” Usher says.

Vision systems also continue to evolve, with recent advancements including sensor fusion, 3D and 4D imaging, and advanced machine learning capabilities.

“Sensor fusion is the combination of multiple sensors that allows for unique algorithms for characterization of products and is not a new concept; however, economics are allowing for more available sensors and technology advancements are making these sensors much easier to utilize than ever before,” Usher explains. “3D imaging is gaining momentum as it allows for volumetric measurement of food products and enhanced ability to do shape analysis for quality control. 4D imaging is simply sensor fusion with color or RGB data and 3D data, allowing some enhanced capabilities. Finally, advancements in machine learning methods such as deep learning are allowing for more technologically impressive systems to be developed. Expect to see a lot more of this in the future.”
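The "4D" fusion Usher describes, combining color (RGB) data with 3D depth data, can be sketched in a few lines. Everything below (image sizes, depth values, and the per-pixel area) is an illustrative assumption, not taken from any real system:

```python
import numpy as np

# Minimal sketch of "4D" sensor fusion: stacking RGB color data with a
# co-registered depth map. All sizes, values, and the pixel scale below
# are illustrative assumptions, not from any commercial system.
rng = np.random.default_rng(0)
rgb = rng.uniform(0.0, 1.0, size=(480, 640, 3))     # color channels, 0..1
depth_mm = rng.uniform(0.0, 50.0, size=(480, 640))  # height above the belt, mm

# Fuse color and depth into one 4-channel array per pixel.
fused = np.concatenate([rgb, depth_mm[..., np.newaxis]], axis=-1)

# Volumetric measurement: integrate the height map over the belt area
# each pixel covers.
pixel_area_mm2 = 0.25  # assumed mm^2 of belt per pixel at working distance
volume_mm3 = float(depth_mm.sum()) * pixel_area_mm2
print(fused.shape)
```

The fused array carries both appearance and shape per pixel, which is what enables the quality-control shape analysis Usher mentions.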

GTRI has developed and licensed several vision systems itself. The most recent development uses an illuminated cone to image a chicken frame after it has been deboned, as Usher mentioned previously.

“Using the characteristics of light travelling through the meat and bone, the system is able to identify the residual meat or yield loss remaining on the frame,” he explains. “This system uses sensor fusion with RGB and near infrared [NIR] imaging cameras and filters. Advanced algorithms allow for removal of bone and other non-meat structures from the data, and the remaining meat volume can be quantified. This system is currently for sale as a management tool for poultry deboning line operations around the world.”
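As a toy illustration of the RGB/NIR fusion idea, a per-pixel band ratio can flag tissue types before quantifying what remains. The ratio and the threshold below are invented for this sketch and are not GTRI's actual algorithm or calibration:

```python
import numpy as np

# Toy illustration of RGB/NIR sensor fusion for tissue classification.
# The band ratio and the 1.2 threshold are invented assumptions, not
# GTRI's actual algorithm.
rng = np.random.default_rng(42)
red = rng.uniform(0.0, 1.0, size=(240, 320))  # red channel from RGB camera
nir = rng.uniform(0.0, 1.0, size=(240, 320))  # co-registered NIR band

# Different tissues respond differently in the NIR band, so a simple
# per-pixel band ratio can separate meat-like from bone-like pixels.
ratio = nir / (red + 1e-6)
meat_mask = ratio > 1.2  # assumed threshold for a "meat-like" response

# Quantify residual meat as the fraction of frame pixels flagged as meat.
meat_fraction = float(meat_mask.mean())
print(f"residual-meat pixels: {meat_fraction:.1%}")
```

In a real system this classification step would feed the volume quantification Usher describes, with bone and other non-meat structures masked out first.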

Other systems licensed in the past include a vision system to control the cook level of baked goods as they exit an oven, and a vision system for characterizing poultry on a shackle line after de-feathering, Usher says.

Eye to the future

Even with all the progress in vision system capabilities, more advancement is still needed to help processors. For example, technology adoption remains a problem, Usher says. With respect to inspection, even though human inspectors are not 100 percent perfect, it is very difficult to sell a technology that is not greater than 99 percent accurate, especially when it concerns a food safety process, he says.

“There could be great benefit to managing complex processes with the ability to sense the process with accuracies as low as 85 to 90 percent, but the industry is hesitant to buy such systems due to the ‘low’ accuracy,” Usher explains. “One could argue that as the older generation of managers and engineers retires from the work force, newer, younger personnel will be more apt to adopt more smart automation, and I believe this is already occurring to a degree.” 

In addition, a lot of work is going into the concept of processing on a “lot size of one,” Usher says. This means that, via sensing, systems would be able to adapt and optimize their operation based on the physical characteristics of the single piece of product they are working on. “A robot that could adapt to each chicken on a deboning line and optimize the cuts, just like a human would do, would revolutionize that process,” Usher gives as an example.
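The "lot size of one" idea can be sketched as a cut plan that rescales itself to each measured piece rather than running one fixed program. The function name, nominal dimensions, and scaling rule here are all hypothetical:

```python
# Toy sketch of "lot size of one" processing: parameters adapt to each
# piece's measured geometry instead of using one fixed program.
# The function name, nominal dimensions, and scaling rule are hypothetical.
def cut_plan(length_mm: float, breast_width_mm: float) -> dict:
    nominal_length_mm = 180.0
    # Scale the nominal cut path to this particular bird's geometry.
    scale = length_mm / nominal_length_mm
    return {
        "scale": round(scale, 3),
        "entry_offset_mm": round(0.1 * breast_width_mm, 1),
    }

# A slightly longer-than-nominal bird gets a proportionally larger cut path.
print(cut_plan(195.0, 110.0))  # {'scale': 1.083, 'entry_offset_mm': 11.0}
```

The sensing step (measuring each bird) is what the vision system supplies; the adaptation step is what would replace today's fixed-program automation.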

Probably the greatest advancement in machine vision that will serve the meat and poultry industry is deep learning, says Perry West, president and founder of Automated Vision Systems Inc., a San Jose, Calif.-based consulting firm in the field of machine vision. “This has the potential to work better for quality control inspection to remove problem product or get a more consistent grading,” he says. “Nature’s products differ from manufactured products in that there is much greater variation. Much of machine vision development has been targeted toward manufacturing where the products should be highly consistent.”

While West sees the biggest challenge for machine vision as handling the variation inherent in meat and poultry products, he expects to see progress in that area in the future.

Usher also sees a positive future for vision systems in meat and poultry processing.

“I imagine we will see an uptick in the adoption of advanced vision systems used for process management and equipment control,” Usher says. “Vision systems will eventually become a standard part of the equipment, working in tandem with the system, instead of being the ‘add-ons’ that we tend to see nowadays. Vision systems will even be moved further upstream, such as into the poultry houses or feed lots, to allow for better planning for processing, as a result of characterizing the live animals. Finally, vision systems are still a major component to any robotics system, and I believe we will also see more adoption of robotics systems as they become more robust and affordable.”