I am going to demonstrate that an LSTM can understand a sentence. The model I used is explained in this blog post.
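The linked post describes the actual model; as background, a single LSTM time step follows the standard gate equations. Below is a minimal NumPy sketch of one step, with random weights standing in for trained parameters (the dimensions and names here are illustrative, not taken from the model above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (standard formulation).
    x: input vector; h_prev/c_prev: previous hidden/cell state.
    W, U, b hold the stacked weights for the four gates."""
    z = W @ x + U @ h_prev + b            # shape (4 * hidden,)
    hidden = h_prev.shape[0]
    i = sigmoid(z[0:hidden])              # input gate
    f = sigmoid(z[hidden:2 * hidden])     # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden]) # output gate
    g = np.tanh(z[3 * hidden:])           # candidate cell state
    c = f * c_prev + i * g                # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

# Toy dimensions; random weights instead of a trained model.
rng = np.random.default_rng(0)
in_dim, hid = 8, 4
W = rng.standard_normal((4 * hid, in_dim))
U = rng.standard_normal((4 * hid, hid))
b = np.zeros(4 * hid)

h = np.zeros(hid)
c = np.zeros(hid)
for t in range(3):  # run over a 3-step "sentence"
    x = rng.standard_normal(in_dim)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The point of the cell state `c` is that the forget and input gates let the network carry information across many words, which is why an LSTM can keep track of context like "account" vs. "payment" across a sentence.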
The video below gives an example that classifies two questions: A is about payment and B is about an account. Both texts mix the two categories, and they mirror each other: the sentences about payment and about an account are swapped between A and B.
Question A (the upper one) means in English: "Thank you for helping with a problem with my account. But today I have another problem, about a payment. I am sad this happened."

Question B (the lower one) means in English: "Thank you for helping with a problem with a payment. But today I have another problem, about my account. I am sad this happened."
The two examples swap their meanings, yet the model classified both A and B correctly. The model is confident in each category because the score for the correct one is much higher than the other scores. Let's look at the scores in the video. The predictions below show the scores; higher is better.
Counting columns from zero, column 0 is "etc," column 1 is "other," column 2 is "account," and column 3 is "payment." The scores look like this:
| question | etc | other | account | payment |
|---|---|---|---|---|
| A | 0.0038606818 | 0.036638796 | 0.04247639 | 0.46222764 |
| B | 0.0007114554 | 0.04938373 | 0.72704375 | 0.0038164733 |
For A, the "payment" column (column 3, zero-based) is higher than all the other columns, which means the model is confident that A is about the "payment" category. The same holds for B: its highest score is in the "account" column, so the model is confident that B is about "account."
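Turning these scores into a label is a simple argmax over the columns. A minimal sketch, using the score rows and the column order described above:

```python
# Category order matches the score columns above.
CATEGORIES = ["etc", "other", "account", "payment"]

def predict_label(scores):
    """Return the category whose score is highest (argmax)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return CATEGORIES[best]

a_scores = [0.0038606818, 0.036638796, 0.04247639, 0.46222764]
b_scores = [0.0007114554, 0.04938373, 0.72704375, 0.0038164733]

print(predict_label(a_scores))  # payment
print(predict_label(b_scores))  # account
```

Note how much larger the winning score is than the runner-up in each row; that gap is why I say the model is confident, not just correct.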
Thus, I found that this model, which uses an LSTM, may indeed understand the meaning of a sentence. Next, I want to try the following:
- Use more samples
- Use a 1-D convolutional network instead of the LSTM
- Use pre-trained word embeddings
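To give a rough idea of the second item: instead of reading the sentence word by word like an LSTM, a 1-D convolution slides a filter window over the word axis of the embedded sentence. A minimal NumPy sketch, with illustrative dimensions and random weights (none of this comes from the model above):

```python
import numpy as np

def conv1d(seq, filters):
    """Valid 1-D convolution over the time (word) axis.
    seq: (steps, embed_dim); filters: (n_filters, width, embed_dim)."""
    n_filters, width, _ = filters.shape
    steps = seq.shape[0] - width + 1
    out = np.empty((steps, n_filters))
    for t in range(steps):
        window = seq[t:t + width]  # (width, embed_dim)
        # Dot each filter with the current window of words.
        out[t] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return out

rng = np.random.default_rng(0)
sentence = rng.standard_normal((10, 8))    # 10 words, 8-dim embeddings
filters = rng.standard_normal((16, 3, 8))  # 16 filters spanning 3 words each

features = conv1d(sentence, filters)       # (8, 16): one row per window
pooled = features.max(axis=0)              # global max pooling -> (16,)
print(features.shape, pooled.shape)  # (8, 16) (16,)
```

Each filter acts like a detector for a short phrase pattern, and max pooling keeps only the strongest match anywhere in the sentence, so this can be faster to train than an LSTM while still catching category cues like "account" or "payment."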