
Real-Time Text Classification Using Kafka and Scikit-learn

Vipul Vaibhaw

Artificial Intelligence / Machine Learning

Introduction:

Text classification is one of the essential tasks in supervised machine learning (ML). Assigning categories to text (tweets, Facebook posts, web pages, library books, media articles, etc.) has many applications, such as spam filtering and sentiment analysis. In this blog, we build a text classification engine to classify topics in an incoming Twitter stream using Apache Kafka and scikit-learn, a Python-based machine learning library.

Let's dive into the details. Here is a diagram to visually explain the components and data flow. The Kafka producer ingests data from Twitter and sends it to the Kafka broker. The Kafka consumer asks the Kafka broker for the tweets. We convert the binary tweet stream from Kafka into human-readable strings and perform predictions using the saved model. We train the model on Twenty Newsgroups, a prebuilt dataset that ships with scikit-learn and is a standard dataset for training classification algorithms.

Training Classification Algorithm

In this blog, we will use the Multinomial Naive Bayes classifier as our machine learning model.

We have used the following libraries/tools:

  • tweepy - Twitter library for Python
  • Apache Kafka
  • scikit-learn
  • pickle - Python object serialization library

Let’s first understand the following key concepts:

  • Word to Vector Methodology (Word2Vec)
  • Bag-of-Words
  • tf-idf
  • Multinomial Naive Bayes classifier

Word2Vec methodology

One of the key ideas in Natural Language Processing (NLP) is how we can efficiently convert words into numeric vectors which can then be given as input to machine learning models to perform predictions.

Neural networks, and most other machine learning models, are essentially mathematical functions which need numbers or vectors to churn out an output. (Tree-based methods are a partial exception, as they can work directly on categorical values.)

For this we have an approach known as Word2Vec, but first consider a very trivial solution: the “one-hot” method, which converts each word into a sparse vector with only one element set to 1 and the rest set to zero.

For example, “the apple a day the good” would have the following representation:

Word2Vec Methodology Example

Here we have transformed the above sentence into a 6×5 matrix: six rows for the six words in the sentence, and five columns for the vocabulary size, since “the” is repeated. But what are we supposed to do when we have a gigantic dictionary to learn from, say, more than 100,000 words? This is where one-hot encoding fails. Moreover, in one-hot encoding the relationship between words is lost; for example, the fact that “Lanka” should come after “Sri”.
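Here is a minimal sketch of the one-hot idea in Python, using the example sentence above:

```python
# Build a one-hot representation of the example sentence.
sentence = "the apple a day the good".split()
vocab = sorted(set(sentence))                  # 5 unique words
index = {word: i for i, word in enumerate(vocab)}

# Each word maps to a length-5 vector with a single 1 -- a 6x5 matrix overall.
one_hot = [[1 if i == index[word] else 0 for i in range(len(vocab))]
           for word in sentence]

for word, vec in zip(sentence, one_hot):
    print(f"{word:>5}: {vec}")
```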

Here is where Word2Vec comes in. Our goal is to vectorize the words while maintaining the context. Word2vec can utilize either of two model architectures to produce a distributed representation of words: continuous bag-of-words (CBOW) or continuous skip-gram. In the continuous bag-of-words architecture, the model predicts the current word from a window of surrounding context words. The order of context words does not influence prediction (bag-of-words assumption). In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words. 
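Our pipeline below does not actually train a Word2Vec model, but for illustration, here is a hedged sketch using gensim (an assumption, not part of this post's stack; the API shown is gensim >= 4.0) of how the two architectures are selected:

```python
from gensim.models import Word2Vec

# Toy corpus; a real model needs far more text to learn useful vectors.
sentences = [["sri", "lanka", "is", "an", "island"],
             ["the", "apple", "a", "day", "the", "good"]]

# sg=0 selects CBOW (predict the current word from its context);
# sg=1 selects skip-gram (predict the context from the current word).
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["lanka"][:5])                    # a dense 50-dim vector, unlike one-hot
print(skipgram.wv.most_similar("sri", topn=3))
```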

Tf-idf (term frequency–inverse document frequency)

TF-IDF is a statistic which determines how important a word is to a document in a given corpus. Variations of tf-idf are used by search engines, for text summarization, etc. You can read more about tf-idf here.
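As a quick illustration, here is a minimal tf-idf sketch using scikit-learn's TfidfVectorizer (the feature-name call assumes scikit-learn >= 1.0):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["kafka streams tweets",
        "tweets about politics",
        "kafka consumer reads tweets"]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)   # sparse matrix: documents x vocabulary

# A word that appears in every document (like "tweets") gets a low idf weight;
# rarer words (like "politics") score higher in the documents that contain them.
print(vectorizer.get_feature_names_out())
print(tfidf.toarray().round(2))
```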

Multinomial Naive Bayes classifier

The Naive Bayes classifier comes from a family of probabilistic classifiers based on Bayes' theorem. We use it for tasks like classifying spam vs. not spam, or sports vs. politics. We are going to use it for classifying the stream of incoming tweets. You can explore it here.
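In brief, for a document d the classifier picks the class c that maximizes P(c | d), which by Bayes' theorem is proportional to P(c) · P(w1 | c) · P(w2 | c) · ... over the words wi in d. In the multinomial variant, the per-word likelihoods P(wi | c) are estimated from (smoothed) word counts in the training data.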

Let's see how they fit together.

How They Fit Together

The data from the 20 Newsgroups dataset is entirely in text format. We cannot feed it directly to any model to do mathematical calculations. We have to extract features from the dataset and convert them into numbers which a model can ingest and then produce an output.
So, we use bag-of-words counts and tf-idf to extract features from the dataset, and then feed them to the Multinomial Naive Bayes classifier to get predictions.

1. Train Your Model

We are going to use this dataset. We create another file and import the needed libraries. We use sklearn for ML and pickle to save the trained model. Now we define the model.

CODE: https://gist.github.com/velotiotech/d67d98aa2c94936dbbf03238d0676e33.js
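The gist above is the authoritative version; here is only a hedged sketch of what that training step might look like, assuming a scikit-learn Pipeline of CountVectorizer, TfidfTransformer, and MultinomialNB, with the fitted model pickled to a hypothetical model.pkl:

```python
import pickle
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Load the prebuilt 20 Newsgroups training split from scikit-learn.
train = fetch_20newsgroups(subset="train", shuffle=True, random_state=42)

model = Pipeline([
    ("vect", CountVectorizer()),      # bag-of-words counts
    ("tfidf", TfidfTransformer()),    # reweight counts by tf-idf
    ("clf", MultinomialNB()),         # multinomial naive bayes classifier
])
model.fit(train.data, train.target)

# Persist the fitted pipeline so the Kafka consumer can load it later.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```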

2. The Kafka Tweet Producer

We have the trained model in place. Now let's get the real-time Twitter stream into Kafka. We define the producer.

CODE: https://gist.github.com/velotiotech/ab09ae636aebc1368b0f012341bfca7d.js
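For reference, a minimal producer setup with the kafka-python client might look like this (the broker address and topic name are assumptions):

```python
from kafka import KafkaProducer

# Connect to a locally running broker (assumed address).
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("twitter-stream", b"hello kafka")   # hypothetical topic name
producer.flush()                                  # block until the message is sent
```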

Now we will define the Kafka settings and create the KafkaPusher class. This is necessary because we need to forward the data coming from the tweepy stream to the Kafka producer.

CODE: https://gist.github.com/velotiotech/742cbc15b79157e5e9a1f85208d464fc.js
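Again, the gist is authoritative; as a hedged sketch, a KafkaPusher along these lines forwards each raw tweet to the producer. This assumes tweepy 3.x (StreamListener was removed in tweepy 4.0) and uses hypothetical credentials, topic name, and track keywords:

```python
import tweepy
from kafka import KafkaProducer

TOPIC = "twitter-stream"   # hypothetical topic name

class KafkaPusher(tweepy.StreamListener):
    """Bridge between the tweepy stream and the Kafka producer."""

    def __init__(self, producer):
        super().__init__()
        self.producer = producer

    def on_data(self, raw_tweet):
        # Forward each raw tweet (a JSON string) to the Kafka broker.
        self.producer.send(TOPIC, raw_tweet.encode("utf-8"))
        return True

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

producer = KafkaProducer(bootstrap_servers="localhost:9092")
stream = tweepy.Stream(auth, KafkaPusher(producer))
stream.filter(track=["politics", "apple"])        # hypothetical keywords
```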

Note - You need to start the Kafka server before running this script.

3. Loading your model for predictions

Now we have the trained model from step 1 and a Twitter stream from step 2. Let's use the model now to do actual predictions. The first step is to load the model:

CODE: https://gist.github.com/velotiotech/794e8d8857b5a7559b5146ec73f6433d.js
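A minimal sketch, assuming the model was pickled to model.pkl as in step 1:

```python
import pickle

# Load the fitted scikit-learn pipeline saved during training.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)
```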

Then we start the Kafka consumer and begin predictions:

CODE: https://gist.github.com/velotiotech/4c3ee4b7851ef02d114bd8cbd8c3a726.js
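As a hedged sketch (the gist is authoritative), the consumer loop might decode each message, pull out the tweet text, and print the predicted newsgroup. This assumes kafka-python and the hypothetical topic name and model path used above:

```python
import json
import pickle
from kafka import KafkaConsumer
from sklearn.datasets import fetch_20newsgroups

# Load the fitted pipeline and recover the class-index -> name mapping.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)
target_names = fetch_20newsgroups(subset="train").target_names

consumer = KafkaConsumer("twitter-stream", bootstrap_servers="localhost:9092")
for message in consumer:
    tweet = json.loads(message.value.decode("utf-8"))  # bytes -> dict
    text = tweet.get("text", "")
    if text:
        label = model.predict([text])[0]
        print(f"{text[:80]} => {target_names[label]}")
```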

Here are some of the classifications done by our model:

  • RT @amazingatheist: Making fun of kids who survived a school shooting just days after the event because you disagree with their politics is… => talk.politics.misc
  • sci.med
  • RT @DavidKlion: Apropos of that D'Souza tweet; I think in order to make sense of our politics, you need to understand that there are some t… => talk.politics.misc
  • RT @BeauWillimon: These students have already cemented a place in history with their activism, and they’re just getting started. No one wil… => talk.politics.misc
  • RT @byedavo: Cause we ain’t got no president => talk.politics.misc
  • RT @appleinsider: .@Apple reportedly in talks to buy cobalt, key Li-ion battery ingredient, directly from miners … => comp.sys.mac.hardware

Here is the link to the complete Git repository.

Conclusion:

In this blog, we built a data pipeline that uses a Naive Bayes model to classify streaming Twitter data. The same approach can be used to classify other sources of data, like news articles, blog posts, etc. Do let us know if you have any questions, queries, or additional thoughts in the comments section below.

Happy coding!


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
