
In a wide embrace of technology, meat and poultry processors are using vision systems throughout the entire process. Some systems inspect product quality at various points along the processing line; others monitor food safety standards; still others check packaging lines for label registration, properly packaged product, proper seals and the presence of foreign materials.

In short, vision systems search for things that aren’t supposed to be there, anywhere along the line. Processors aren’t necessarily using vision systems in new operational areas, but the systems are becoming more widely distributed throughout the process because of advances in size and form factor.
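To make those packaging-line checks concrete, here is a minimal sketch of what one might look like, assuming a grayscale camera frame and hand-tuned thresholds; the region coordinates and cutoff values are illustrative assumptions, not drawn from any processor’s actual system.

```python
import cv2
import numpy as np

# Hypothetical inspection: verify the package's seal region is present and
# flag unexpectedly dark blobs (possible foreign material). The ROI,
# thresholds and area limit are illustrative assumptions.
SEAL_ROI = (slice(0, 40), slice(0, 640))   # top strip of the frame
SEAL_MIN_MEAN = 120                        # seal stock should read bright
FOREIGN_MAX_AREA = 50                      # px; larger dark blobs fail

def inspect(frame_gray: np.ndarray) -> bool:
    """Return True if the frame passes both checks."""
    seal_ok = frame_gray[SEAL_ROI].mean() >= SEAL_MIN_MEAN
    # Dark regions stand out against bright product and packaging.
    _, dark = cv2.threshold(frame_gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    foreign_ok = all(cv2.contourArea(c) < FOREIGN_MAX_AREA for c in contours)
    return seal_ok and foreign_ok
```

A production system would replace these hand-tuned rules with calibrated optics and, increasingly, trained models, but the pass/fail structure is the same.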

“Some of the physical limitations are being overcome, allowing vision systems to go into places they weren’t before, but they are not necessarily doing new tasks,” says Colin Usher, research scientist at the Georgia Tech Research Institute (GTRI) in Atlanta.

 

Covering more

The falling cost of components has enabled fairly steady adoption of vision systems across the industry, Usher explains.

“Processing hardware is becoming more powerful, there are more system integrators out there and more people who are actually able to do the job than there have been historically,” he says.

Several developments are ongoing in vision systems technology, too. New 3D sensors are being heavily implemented in vision systems, offering 3D and shape-conformity measurements for tasks that historically were treated as 2D computer vision problems, Usher says.
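As a rough illustration of the kind of shape-conformity measurement a 3D sensor enables, the sketch below compares a measured depth map against a reference profile; the tolerance, pass criterion and synthetic data are assumptions for illustration only.

```python
import numpy as np

def conforms(depth_mm: np.ndarray, reference_mm: np.ndarray,
             tol_mm: float = 2.0, max_bad_fraction: float = 0.01) -> bool:
    """Pass if at most 1% of pixels deviate more than tol_mm from reference."""
    deviation = np.abs(depth_mm - reference_mm)
    bad_fraction = np.mean(deviation > tol_mm)
    return bool(bad_fraction <= max_bad_fraction)

# Usage with synthetic data: a flat 100x100 product nominally 30 mm tall.
reference = np.full((100, 100), 30.0)
measured = reference + np.random.normal(0, 0.5, reference.shape)
print(conforms(measured, reference))  # True for a well-formed product
```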

Sensor fusion is another area of development: systems now draw on several cameras or 3D sensors, and even metal detectors, ultrasonic detectors and other sensors, fusing those readings with the image data to make better and more robust decisions, Usher says.
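One simple way to picture sensor fusion is a decision rule that combines several readings, so that two marginal signals together can trigger a reject that neither would alone. The sensors, weights and threshold below are hypothetical placeholders, not any vendor’s actual scheme.

```python
from dataclasses import dataclass

@dataclass
class Readings:
    vision_defect_prob: float   # from the camera model, 0..1
    metal_detected: bool        # binary metal-detector output
    ultrasonic_anomaly: float   # deviation score from the ultrasonic sensor

def fuse(r: Readings) -> bool:
    """Return True to reject the item. A metal hit rejects outright;
    otherwise weaker cues are combined, so two marginal signals together
    can trigger what neither would alone."""
    if r.metal_detected:
        return True
    score = 0.7 * r.vision_defect_prob + 0.3 * min(r.ultrasonic_anomaly, 1.0)
    return score > 0.5

print(fuse(Readings(0.4, False, 0.8)))  # True: combined evidence rejects
```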

Another area of development is edge computing. While not necessarily a new concept, edge computing lets processors run more powerful image processing, machine learning and other algorithmic work locally, right at the camera.

“So instead of having a camera on a line somewhere that’s connected to a large computer in an office somewhere, or that’s connected over the Internet to the cloud, you instead have these single-board, very small computers that can be packaged right next to the camera and put over the line, and all the processing is contained there in that one system,” Usher says.
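The sketch below captures that edge pattern: capture, inference and decision all run on the small computer beside the camera, and only the decision leaves the device. The camera index and the stub classifier are placeholders for whatever on-device model a processor actually deploys.

```python
import cv2

def classify(frame_gray) -> bool:
    """Stand-in for an on-device model; returns True if the frame passes.
    Here it is just a trivial brightness check."""
    return frame_gray.mean() > 50

cap = cv2.VideoCapture(0)  # camera index is an assumption
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        passed = classify(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        # Only the decision (e.g., a reject signal) needs to leave the box,
        # not the image stream -- the point of keeping processing local.
        if not passed:
            print("reject")
finally:
    cap.release()
```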

Recently, GTRI has leveraged its expertise in sensing systems to study them in the context of equipment and robotic devices, where the vision system is an input and the system performs some type of action, for example, material handling on a processing line. One project Usher directs applies video image processing and machine learning at the edge: a robot drives around a chicken house, identifies chickens, categorizes behaviors such as their movements, and quantifies metrics to assess animal welfare, in addition to performing utilitarian tasks such as picking up eggs from the floor.
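One way such a robot might quantify a welfare-style metric is to turn per-frame detections into a movement score. The sketch below assumes (x, y) bird centroids from an upstream detector; the article does not specify GTRI’s actual metrics, so this is only an illustrative stand-in.

```python
import numpy as np

def activity_index(tracks: list[list[tuple[float, float]]]) -> float:
    """Mean per-frame displacement across tracked birds, in pixels.
    Each track is the sequence of one bird's detected centroids."""
    steps = []
    for track in tracks:
        pts = np.asarray(track)
        if len(pts) > 1:
            steps.extend(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return float(np.mean(steps)) if steps else 0.0

# Two birds tracked over synthetic frames: one moving, one stationary.
print(activity_index([[(0, 0), (3, 4), (6, 8)], [(10, 10), (10, 10)]]))
```

Unusually low activity for a flock, for instance, could then be flagged as a welfare concern for a human to review.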

While vision systems are already very successful in certain areas of processing lines, one challenge is opening up new areas that used to be out of reach because of physical constraints and limited hardware processing power.

“As we get significantly more powerful, smaller computers, we are able to do more complicated processing tasks, but there is still a point where the computers aren’t powerful enough or the form factors aren’t quite right,” Usher says. “The challenges come into what are the advanced programming techniques, what is the optimization of things that we can do, so we can make these vision systems work in all of these different environments.”

Usher expects that in the near future, the industry will continue to see more edge computing. Newer single-board computers and systems, including some of the hardware that supports processing and advanced algorithms such as machine learning, are going to be more compact, robust and less expensive. “You are going to see more opportunity to put these vision systems in more places that they haven’t historically been able to go, whether that is because of the economy or because of physical characteristics,” Usher says.

In the more distant future, Usher expects to see much smarter sensing integrated with much smaller devices that are able to do more dynamic processing. “There is a phrase they call ‘lot size of one,’” Usher says. “What that means is, if you are rapidly changing the type of product that you are manufacturing, can your system adapt to that on the fly? What are the sensing inputs, and what are the changes to the mechanical systems to accommodate that, to have much more flexible processing? Vision systems, and sensing systems in general, are going to be a key component of those types of things.” NP