Text Classification


Why is it interesting?

Service desks regularly drown in the volume of questions and complaints flowing in daily. Most of these incoming messages have to be redirected to different departments, which makes triage a time-consuming, manual job.
A text classification application leveraging Machine Learning can analyze incoming messages and redirect them automatically.

This text classification algorithm can be embedded in your service desk software (for e-mails), in your chatbot, and even extended to your call center (using voice-to-text).
All in all, it can increase the efficiency of your processes, help raise your customer NPS, and lay the foundation for smarter analysis of your e-mails and customer feedback.

Imagine having a service desk that has to redirect tons of incoming messages every day to the right person or department. The service desk is supported by a classifying algorithm that redirects messages automatically when its certainty exceeds a threshold. Dubious cases are still reviewed by a service desk agent. For the purpose of this demo we created an example setup: you are an unsatisfied client of a financial institution and will write a financial complaint to the service desk.
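The routing logic described above can be sketched in a few lines. This is a minimal illustration, not the demo's actual code: the threshold value, department names, and probabilities are all invented, and a real system would obtain the probabilities from a trained model.

```python
THRESHOLD = 0.8  # assumed certainty threshold (illustrative)

def route(probabilities, departments, threshold=THRESHOLD):
    """Redirect to the most likely department if the model is certain
    enough, otherwise flag the message for review by a human agent."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] >= threshold:
        return departments[best]   # automatic redirect
    return "human_review"          # dubious case: the agent decides

depts = ["mortgage", "credit_card", "debt_collection"]
print(route([0.05, 0.91, 0.04], depts))  # confident -> "credit_card"
print(route([0.40, 0.35, 0.25], depts))  # dubious   -> "human_review"
```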

  • Step 1: Write a complaint about a financial product. Feel free to describe an elaborate issue, as the model is more accurate with more input text.
  • Step 2: Send your complaint and check the results!

To analyze and predict incoming complaints for financial institutions we used the open-source programming language Python. Flask serves the web page, as it lets us easily pass information from the web page to our Python scripts and vice versa. To simplify deployment, the applications are packaged together in a Docker container; Docker runs applications in a containerized environment, so the app can run on any VM. In a final stage, Kubernetes deploys the Docker containers and makes them accessible through a website. The web page itself is customized using HTML, CSS and Bootstrap.
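A Flask endpoint for this kind of setup can be sketched as below. This is an assumed, minimal version: the route name `/classify`, the keyword-based placeholder prediction, and the JSON shape are all invented for illustration, and the real app would call the trained model instead.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/classify", methods=["POST"])
def classify():
    # The web page POSTs the complaint text as JSON to this endpoint
    complaint = request.get_json()["complaint"]
    # Placeholder: a trained classifier would be invoked here
    label = "credit_card" if "card" in complaint.lower() else "other"
    return jsonify({"label": label})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```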

First, the textual data is downloaded from an existing dataset on Kaggle, an online platform with many public datasets. Then, the textual data is transformed into a usable labelled dataset utilizing a tokenizer based on the 50,000 most used words in the dataset. Next, every word is represented by a number, and padding ensures that every vector has the same length. Finally, a deep learning model written in Python is trained on the labelled dataset using the Keras library, which runs on top of TensorFlow.
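The tokenize-then-pad preprocessing can be illustrated with a small pure-Python stand-in for Keras's tokenizer and padding utilities. The example sentences, vocabulary limit usage, and `MAX_LEN` are invented for this sketch; the real pipeline uses the Kaggle dataset and the Keras tooling mentioned above.

```python
from collections import Counter

VOCAB_SIZE = 50_000  # keep the most frequent words (50,000 in the real setup)
MAX_LEN = 8          # illustrative; real complaints need a larger value

# Toy complaints standing in for the Kaggle dataset
texts = ["my card was charged twice",
         "the bank lost my mortgage payment"]

# Build a word -> index mapping from the most common words
# (index 0 is reserved for padding)
counts = Counter(w for t in texts for w in t.lower().split())
index = {w: i for i, (w, _) in enumerate(counts.most_common(VOCAB_SIZE), start=1)}

# Every word is represented by a number ...
sequences = [[index[w] for w in t.lower().split()] for t in texts]

# ... and padding ensures every vector has the same length
padded = [seq[:MAX_LEN] + [0] * (MAX_LEN - len(seq)) for seq in sequences]
print(padded)
```

The resulting fixed-length vectors are what a Keras model (an `Embedding` layer followed by dense layers) would consume during training.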



« Back to homepage