Quality Control with AI


Why is it interesting?

Quality control in manufacturing firms often remains a manually intensive task. By smartly using AI, and more precisely computer vision, we can automate our inspection processes, increasing the reliability of quality control and thus improving our SLAs.

Computer vision has now reached a maturity level at which it can truly and accurately support quality control, helping you to increase end-product quality, improve production speed and grow customer satisfaction.

Imagine being a manufacturing firm that uses computer vision to monitor the quality of products on a conveyor belt. For the purpose of this demo, an example setup was built. You’ll need two things to get started: a camera that will be used as a monitoring device and a banana that will serve as the test object for the quality control.

  • Step 1: Take a banana and show it to your camera.
  • Step 2: Click on one of the buttons to start predicting the ripeness of your banana.
  • Step 3: Smile!

To analyze and predict the ripeness of the banana we used the open-source programming language Python. To create our web page, Flask is leveraged, as this allows us to easily pass information from the web page to our Python scripts and vice versa. To streamline communication between the Flask server and the web app, all applications are put together in a Docker container. Docker is software that packages applications in a containerized environment, allowing the app to run consistently on VMs. In a final stage, Kubernetes is used to deploy the different Docker containers and make them accessible through a website. The web page is customized using HTML, CSS and Bootstrap.
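To give an idea of how the web page talks to the Python code, here is a minimal sketch of a Flask endpoint for the ripeness prediction. The route name, the label list and the `label_from_scores` helper are illustrative assumptions, not the demo's actual code, and the model call is stubbed out:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical class order; the real labels depend on the training dataset.
LABELS = ["unripe", "ripe", "overripe"]

def label_from_scores(scores):
    """Map a model's softmax output to a human-readable ripeness label."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best]

@app.route("/predict", methods=["POST"])
def predict():
    # In the real demo, the camera frame would be preprocessed and passed
    # through the trained model here; we read precomputed scores instead.
    scores = request.json["scores"]
    return jsonify(label=label_from_scores(scores))

# Inside the Docker container the server would be started with something
# like: app.run(host="0.0.0.0", port=5000)
```

The web page then only needs to POST to `/predict` and render the returned label, which is what makes the Flask setup convenient for this kind of demo.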

First, we created a labelled dataset using around 20 images of every possible prediction. Next, for the analytics part, a pre-defined deep learning model structure from the Keras library is adopted, which runs on top of TensorFlow. The adopted model is ResNet50, a residual network of 50 layers, which we trained on our dataset. Finally, to get the images into a suitable format, the OpenCV library was used for some image preprocessing.
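As a sketch, the preprocessing step could look like the following. It assumes Keras' standard "caffe"-style input convention for the ImageNet-pretrained ResNet50 (BGR channels, per-channel ImageNet means subtracted); in the real demo the resize itself would be done with OpenCV (`cv2.resize`), but here the frame is assumed to already have the target size so the sketch only needs NumPy:

```python
import numpy as np

# ResNet50 expects 224 x 224 inputs with three colour channels.
TARGET_SIZE = (224, 224)

def preprocess(frame):
    """Turn a raw camera frame into a ResNet50-ready batch.

    `frame` is an H x W x 3 uint8 array in BGR channel order, as
    returned by OpenCV's cv2.imread or VideoCapture.read. We assume it
    has already been resized to TARGET_SIZE.
    """
    x = frame.astype("float32")
    # Keras' ImageNet-pretrained ResNet50 uses "caffe"-style inputs:
    # BGR channels with the per-channel ImageNet means subtracted.
    x -= np.array([103.939, 116.779, 123.68], dtype="float32")
    return x[np.newaxis, ...]  # add a batch dimension: (1, 224, 224, 3)
```

The resulting batch can then be fed to the trained model (in Keras, e.g. `model.predict(batch)`), whose softmax output gives the ripeness prediction shown on the web page.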
