Extruded Profiles Inspection Station

This project was my year-long senior design project, in which I worked with two teammates to design and build an automated inspection station for paint defects on aluminum extrusions. The station was to be used for quality control on two profiles of aluminum extrusion, ranging from 13 to 20 feet in length and produced in eight different colors.

My two assignments throughout this project were researching and developing an image processing method to locate the defects present on a part, and a method for increasing the accuracy of the defect classification process. Through my research, and with help from a Graduate Research Assistant, I determined that image masking and segmentation would be an extremely useful form of image processing for determining the location of defects. Locating the defects in an image was done after the image was run through a Convolutional Neural Network (CNN), which one of my teammates developed, to determine whether a paint defect was present in the image. In our initial research we found that classification of the types of defects present on a part can be greatly improved using synthetic images created with 3D modeling.
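The masking-and-segmentation idea can be sketched in a few lines: mask the pixels that differ from a defect-free reference, then segment the mask into connected blobs and report each blob's bounding box. This is a minimal illustration of the technique, not the project's actual script; the threshold, reference-image approach, and function name are all illustrative assumptions.

```python
import numpy as np

def locate_defects(image, background, threshold=0.2):
    """Return bounding boxes (r_min, c_min, r_max, c_max) of defect blobs.

    image, background: 2D grayscale arrays scaled to [0, 1].
    threshold: illustrative difference threshold, not a tuned value.
    """
    # Masking: keep pixels that differ noticeably from the defect-free reference.
    mask = np.abs(image - background) > threshold

    # Segmentation: group masked pixels into 4-connected blobs via flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    boxes = []
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                current += 1
                stack, rows, cols = [(r, c)], [], []
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        rows.append(y)
                        cols.append(x)
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
                boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes
```

For example, an image that matches its background everywhere except one bright patch would yield a single bounding box around that patch, which can then be mapped back to a position on the part.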

After writing a Python script to perform image masking and segmentation to find the location of defects in an image, I focused on helping improve the accuracy of our CNN. To do this, synthetic defects were modeled to look as close as possible to the real defects present on the samples. To model them, I learned how to use Blender, a powerful, free, and open-source 3D creation suite. This software allowed me to model the two types of aluminum extrusion, along with the positions of the light sources that would be present in the inspection station. Where Blender really shines is in its ability to render pictures of your 3D creation by specifying the angle, focal length, and position of a camera. Even better is its easy-to-use Application Programming Interface (API), which can be used to change anything and everything about a 3D scene using Python.

After modeling the 3D scene of the inspection station, I learned Blender's API and developed a script that randomly rotated and translated a modeled defect around the aluminum extrusion and changed the light angle of the scene. For each random rotation, translation, and light angle, a picture was rendered for use in training our CNN. After rendering each image, the part was stepped forward, simulating the part rolling past the detection system on a conveyor belt.
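The randomization loop described above can be sketched in plain Python: each render gets a random defect rotation and translation, a random light angle, and a conveyor position that steps forward one increment per frame. Inside Blender these values would be applied to scene objects through its `bpy` API; the function name, parameter ranges, and step size below are illustrative assumptions, not the project's tuned values.

```python
import math
import random

def generate_scene_parameters(n_renders, step_length=0.5, seed=42):
    """Generate per-render scene parameters for synthetic defect images.

    Each entry holds a random rotation/translation for the defect, a
    random light angle, and a conveyor position that advances a fixed
    amount per frame to simulate the part rolling past the camera.
    """
    rng = random.Random(seed)  # seeded so a dataset can be regenerated
    params = []
    for frame in range(n_renders):
        params.append({
            # Random rotation of the defect around the extrusion axis (radians).
            "defect_rotation": rng.uniform(0.0, 2.0 * math.pi),
            # Random translation of the defect along the extrusion (illustrative units).
            "defect_offset": rng.uniform(-1.0, 1.0),
            # Random light elevation angle above the part (degrees).
            "light_angle": rng.uniform(20.0, 70.0),
            # Conveyor steps forward a fixed amount each render.
            "conveyor_position": frame * step_length,
        })
    return params
```

In a Blender script, each dictionary would be applied to the defect, light, and part objects before calling the renderer, producing one labeled training image per entry.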

I believe my primary learning outcome from this project was discovering how fulfilling Lean Engineering projects can be, and how, with programming knowledge, you can achieve almost anything an expensive commercial system can do at far less cost.