## Linear algebra for machine learning

I’ve been reviewing linear algebra with Mathematics for Machine Learning: Linear Algebra on Coursera, and I have just finished the Week 2 module. The course has been easy to follow so far. Below are my notes from the Week 1 and Week 2 modules.

# The three properties of the dot product

Commutative

For example, with $r = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$ and $s = \begin{bmatrix} -1 \\ 2 \end{bmatrix}$:

$r \cdot s = r_i s_i + r_j s_j \\ = 3 \times (-1) + 2 \times 2 = 1 \\ = s \cdot r$

Distributive

$r \cdot (s + t) = r \cdot s + r \cdot t$ $r = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} s = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix} t = \begin{bmatrix} t_1 \\ t_2 \\ \vdots \\ t_n \end{bmatrix} \\ r \cdot (s + t) = r_1(s_1 + t_1) + r_2(s_2 + t_2) + \cdots + r_n (s_n + t_n) \\ = r_1 s_1 + r_1 t_1 + r_2 s_2 + r_2 t_2 + \cdots + r_n s_n + r_n t_n \\ = r \cdot s + r \cdot t$

Associative over scalar multiplication

$r \cdot (as) = a(r \cdot s) \\ r_i(as_i) + r_j(a s_j) = a(r_is_i + r_js_j)$

And r dot r is equal to the size of r squared.

$r \cdot r = r_ir_i + r_jr_j \\ = r_i^2 + r_j^2 \\ r \cdot r = |r|^2$
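These properties are easy to check numerically. Here is a small sketch in plain Python, with vectors as lists (the helper name `dot` is mine):

```python
def dot(u, v):
    # sum of elementwise products of two equal-length vectors
    return sum(a * b for a, b in zip(u, v))

r = [3, 2]
s = [-1, 2]
t = [5, 4]
a = 7

# commutative: r . s == s . r
assert dot(r, s) == dot(s, r) == 1
# distributive over addition: r . (s + t) == r . s + r . t
s_plus_t = [si + ti for si, ti in zip(s, t)]
assert dot(r, s_plus_t) == dot(r, s) + dot(r, t)
# associative over scalar multiplication: r . (a s) == a (r . s)
assert dot(r, [a * si for si in s]) == a * dot(r, s)
# r . r is the squared size of r
assert dot(r, r) == 3**2 + 2**2
```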

# Cosine and dot product

cosine rule

$c^2 = a^2 + b^2 - 2ab \cos\theta$

$|r - s|^2 = |r|^2 + |s|^2 - 2|r||s|\cos\theta \\ (r - s) \cdot (r - s) = r \cdot r - s \cdot r - r \cdot s + s \cdot s \\ = |r|^2 - 2 s \cdot r + |s|^2 \\ -2 s \cdot r = -2|r||s|\cos\theta \\ r \cdot s = |r||s|\cos\theta$

It takes the sizes of the two vectors and multiplies them by the cosine of the angle between them, so it tells us something about the extent to which the two vectors point in the same direction.

• When the two vectors point in the same direction, $\cos 0 = 1$, so $r \cdot s = |r||s|$.
• When the two vectors are orthogonal to each other, $\cos 90 = 0$, so $r \cdot s = |r||s| \times 0 = 0$.
• When they point in opposite directions, $\cos 180 = -1$, so $r \cdot s = -|r||s|$.
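Rearranged, this gives a small routine for computing the angle between two vectors (the function names here are mine):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    # |v| = sqrt(v . v)
    return math.sqrt(dot(v, v))

def angle_degrees(u, v):
    # from r . s = |r||s| cos(theta)
    return math.degrees(math.acos(dot(u, v) / (norm(u) * norm(v))))

print(angle_degrees([1, 0], [0, 1]))   # 90.0, orthogonal vectors
print(angle_degrees([1, 0], [-2, 0]))  # 180.0, opposite directions
```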

# Projection

Imagine a light shining down perpendicular to r: the shadow that s casts on r is called the projection of s onto r.

$\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}} = \frac{\text{adjacent}}{|s|} \\ r \cdot s = |r| \underbrace{|s| \cos \theta}_{\text{adjacent (the scalar projection)}}$

Scalar projection

$\frac {r \cdot s}{|r|} = |s| \cos \theta$

Vector projection

The vector projection combines the scalar projection with the direction of r, encoded as the unit vector $\frac{r}{|r|}$:

$\frac {r \cdot s}{|r||r|}r = \frac {r \cdot s}{r \cdot r}r$
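Both projections can be written directly from these formulas. A sketch in plain Python (the helper names are mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scalar_projection(s, r):
    # (r . s) / |r| = |s| cos(theta): the length of the shadow of s on r
    return dot(r, s) / dot(r, r) ** 0.5

def vector_projection(s, r):
    # ((r . s) / (r . r)) r: the shadow of s on r, as a vector along r
    c = dot(r, s) / dot(r, r)
    return [c * ri for ri in r]

print(scalar_projection([3, 4], [2, 0]))  # 3.0
print(vector_projection([3, 4], [2, 0]))  # [3.0, 0.0]
```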

# Changing Basis

To change basis with projections like this, the new basis vectors must be orthogonal to each other.

We convert from the $e$ set of basis vectors to the $b$ set of basis vectors. For example, take $r_e = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$, $b_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, and $b_2 = \begin{bmatrix} -2 \\ 4 \end{bmatrix}$.

This projection is 2 times $b_1$:

$\frac {r_e \cdot b_1}{|b_1|^2} = \frac {3 \times 2 + 4 \times 1}{2^2 + 1^2} = \frac {10}{5} = 2$ $\frac {r_e \cdot b_1}{|b_1|^2} b_1 = 2 \begin{bmatrix}2\\1 \end{bmatrix} = \begin{bmatrix}4\\2 \end{bmatrix}$

This projection is $\frac{1}{2}$ times $b_2$:

$\frac {r_e \cdot b_2}{|b_2|^2} = \frac {3 \times (-2) + 4 \times 4}{(-2)^2 + 4^2} = \frac {10}{20} = \frac {1}{2}$ $\frac {r_e \cdot b_2}{|b_2|^2} b_2 = \frac {1}{2} \begin{bmatrix}-2\\4 \end{bmatrix} = \begin{bmatrix}-1\\2 \end{bmatrix}$

Adding the two projections, we get the original vector r back:

$\begin{bmatrix}4\\2\end{bmatrix} + \begin{bmatrix}-1\\2\end{bmatrix} = \begin{bmatrix}3\\4\end{bmatrix}$

In the basis b, it’s going to be
$r_b = \begin{bmatrix} 2 \\ \frac{1}{2} \\ \end{bmatrix}$
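The change of basis above can be reproduced numerically with the same projection formula (a sketch; the helper name is mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def coord_along(r, b):
    # component of r along basis vector b: (r . b) / |b|^2
    return dot(r, b) / dot(b, b)

r_e = [3, 4]
b1 = [2, 1]
b2 = [-2, 4]

r_b = [coord_along(r_e, b1), coord_along(r_e, b2)]
print(r_b)  # [2.0, 0.5]
```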

We can redescribe the original axes using some other axes, that is, some other basis vectors: the basis vectors we choose to describe the space of the data.

# Basis, vectors, and linear independence

A basis is a set of n vectors that:

• are not linear combinations of each other (they are linearly independent)
• span the space

The space is then n-dimensional.
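One way to check both conditions at once: n vectors in an n-dimensional space form a basis exactly when the matrix with those vectors as columns has full rank. A sketch using NumPy (the helper name is mine):

```python
import numpy as np

def is_basis(vectors):
    # n vectors are linearly independent and span an n-dimensional
    # space iff the matrix they form as columns has full rank
    m = np.column_stack(vectors)
    return m.shape[0] == m.shape[1] == np.linalg.matrix_rank(m)

print(is_basis([[2, 1], [-2, 4]]))  # True: the b1, b2 used above
print(is_basis([[2, 1], [4, 2]]))   # False: the second is 2x the first
```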

# Applications of changing basis

We can get the minimum possible amount of noisiness.

## Proposal of CEDEC 2018

I submitted a proposal about our automatic reply system for customer support to CEDEC 2018. Last week, the CEDEC 2018 committee announced which proposals were accepted, and I am afraid mine was not. I think the cause is that I had only built the automatic reply system itself; I should have also covered applying the system to our customer support operation and the feedback from our customer support team, but I had not finished those tasks yet. Next year I will submit a proposal that addresses these points!

## LSTM's ability to understand sentences


I'd like to share something I personally found interesting about an LSTM's ability to understand sentences. The trained model used here is introduced in this article.

Sentence A

この前はアカウントの引き継ぎの問題解決ありがとうございました。今回の不具合はアイテムを購入したのに反映されません。このようなことが続くのは悲しいです。

Sentence B

この前はアイテム購入の問題解決ありがとうございました。今回の不具合はアカウントの引き継ぎができない問題です。このようなことが続くのは悲しいです。

As explained above, the two sentences swap the categories in their first and second halves so that their meanings are reversed. The model classifies both questions into the correct category. The "predictions" part under the video shows the score for each category (I'll call it the confidence); the higher the value, the more confident the model is that the text belongs to that category.

Sentence A

predictions
etc, other, account, payment
0.0038606818, 0.036638796, 0.04247639, 0.46222764

The confidences for sentence B are as follows. The column for account has the highest confidence, so the model classifies it into the account category.

Sentence B

predictions
etc, other, account, payment
0.0007114554, 0.04938373, 0.72704375, 0.0038164733

In both cases, the most confident category leads the others by a large margin. From this result, I suspect the LSTM is classifying the category not simply from individual words but from the sentence as a whole.

# Things I want to verify next

• Increase the number of samples
• Use a 1-D convolutional network instead of the LSTM
• Use a pre-trained word embedding

## Understanding sentence with LSTM

I am going to demonstrate that an LSTM can understand a sentence. The model I used is explained in this blog post.

The video below shows the model classifying two questions: A is about payment and B is about an account. Both texts mention the two categories, but with the first and second halves swapped, so word content alone cannot distinguish them.

Question A (the upper one) means in English: “Thank you for helping with a problem with my account. But today I have another problem, about a payment. I am sad that this keeps happening.”

Question B (the lower one) means in English: “Thank you for helping with a problem with a payment. But today I have another problem, about my account. I am sad that this keeps happening.”

These examples flip each other's meanings, yet the model classified both A and B into the correct categories. The model is confident of a category when its score is clearly higher than the other scores. Let's look at the scores in the video: the "predictions" section shows one score per category, and higher is better. The 1st column (zero-based) is the “other” category, the 2nd is “account,” and the 3rd is “payment.” The scores look like this:

Sentence A

predictions
etc, other, account, payment
0.0038606818, 0.036638796, 0.04247639, 0.46222764

Sentence B

predictions
etc, other, account, payment
0.0007114554, 0.04938373, 0.72704375, 0.0038164733

For A, the 3rd column is higher than the other columns, which means the model is confident that A is about the “payment” category. The same goes for B: the model is confident it is about the “account” category.
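Picking the winning category from a prediction row is just an argmax over the scores. A minimal sketch in plain Python, with the category order taken from the tables above:

```python
categories = ["etc", "other", "account", "payment"]

def predicted_category(scores):
    # return the category whose confidence score is highest
    return categories[scores.index(max(scores))]

a = [0.0038606818, 0.036638796, 0.04247639, 0.46222764]
b = [0.0007114554, 0.04938373, 0.72704375, 0.0038164733]
print(predicted_category(a))  # payment
print(predicted_category(b))  # account
```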

Thus, this LSTM-based model may be understanding the sentences of a text, not just their words.

Things I want to verify next:

• Use more samples
• Use 1-D convolutional network instead of LSTM
• Use pre-trained word embedding

## Multi-class text classification with an LSTM


I built a prototype of a web application that classifies text with Keras. The prototype assumes use in customer service: it automatically replies to questions from customers. Each question belongs to one of several categories, and the application classifies which category that is.

See GitHub for the sample source code.

# Collect the data

## File format

The file is a TSV containing a question ID, question text, answer text, and category. The question and answer texts are in Japanese. The format looks like this:

This data set has about 9000 samples and about 15 categories.

## Load the data

Load the data from the TSV file.

import csv

issues = []

with open("data/issues.tsv", 'r', encoding="utf-8") as tsv:
    reader = csv.reader(tsv, delimiter='\t')
    for row in reader:
        issue = []
        issue.append(row[1])  # question
        issue.append(row[3])  # category

        issues.append(issue)

# Pre-process the text

## Remove unused characters

The text data in the data set contains strings we won't use this time, such as e-mail addresses and symbols, so we remove them.

import re

filtered_text = []
# example of a raw question text:
# "お時間を頂戴しております。version 1.2.3 ----------------------------------------"

for t in issues:
    result = re.compile('-+').sub('', t[0])
    result = re.compile('[0-9]+').sub('0', result)
    result = re.compile(r'\s+').sub('', result)
    # ... several more substitutions like these follow

    # a question can become an empty string, so skip those rows
    if len(result) > 0:
        filtered_text.append(result)

print("text:%s" % result)
# text:お時間を頂戴しております。

## Create samples and labels

Create the samples and labels from the data set. Instead of using all 15 categories, as an example we use only the "Account" and "Payment" categories and label everything else as "other". The three labels need roughly the same number of samples; if the data is skewed, the LSTM cannot classify well. The "Payment" label had only 688 samples, so I capped each label at about 700 samples.

Create the samples and labels:

labels = []
samples = []
threshold = 700
cnt1 = 0  # other
cnt2 = 0  # Account
cnt3 = 0  # Payment

for i, row in enumerate(filtered_samples):
    if 'Account' in row[2]:
        if cnt2 < threshold:
            cnt2 += 1
            labels.append(2)
            samples.append(row[0])
    elif 'Payment' in row[2]:
        if cnt3 < threshold:
            cnt3 += 1
            labels.append(3)
            samples.append(row[0])
    else:
        if cnt1 < threshold:
            cnt1 += 1
            labels.append(1)
            samples.append(row[0])

filtered_samples is the data set from which symbols and the like were removed beforehand.

## Tokenize with MeCab

お時間を頂戴しております

Convert this text into space-separated words with MeCab.

import MeCab

def tokenize(text):
    wakati = MeCab.Tagger("-O wakati")
    wakati.parse("")
    words = wakati.parse(text)

    # strip the trailing newline MeCab appends
    if words[-1] == u"\n":
        words = words[:-1]

    return words

texts = [tokenize(a) for a in samples]

お 時間 を 頂戴 し て おり ます

# Split the samples and labels

Split the samples and labels into training data and validation data.

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
from keras.utils.np_utils import to_categorical

maxlen = 1000
training_samples = 1600  # training data 80 : validation data 20
validation_samples = len(texts) - training_samples
max_words = 15000

# create the word index
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

word_index = tokenizer.word_index
print("Found {} unique tokens.".format(len(word_index)))

# pad every sequence with zeros to the same length
data = pad_sequences(sequences, maxlen=maxlen)

# convert the labels to a binary class matrix
categorical_labels = to_categorical(labels)
labels = np.asarray(categorical_labels)

print("Shape of data tensor:{}".format(data.shape))
print("Shape of label tensor:{}".format(labels.shape))

# shuffle the data and labels with the same permutation
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]

x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]

data is now a sequence of integers like this:

[0, 0, 0, 10, 5, 24]

Every non-zero integer corresponds to one of the tokenized words; zero means there is no word there. The example above has three words, so the columns on the left are filled with zeros.

# Create and train the model

## Create the model

from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM

model = Sequential()
# a minimal sketch: 32 is an illustrative size, not a value from the original post
model.add(Embedding(max_words, 32))
model.add(LSTM(32))
model.add(Dense(4, activation='softmax'))  # labels 0-3 as a one-hot matrix
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
model.summary()

Besides the LSTM, this model also learns a word embedding at the same time, via Embedding().

## Train

Just call model.fit().

history = model.fit(x_train, y_train, epochs=15, batch_size=32, validation_data=(x_val, y_val))

## Plot the results

%matplotlib inline

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

# Save the model

Save the model and its trained weights.

model.save('pre_trained_model.h5')

# Create a web application

Before predicting a category, we need to build the word index; it must be the same one used when the model was created.

app.py

# Load the pre-trained model

np.argmax(res[0])

See this repository for the source code.

# References

Deep Learning with Python was a great reference for me! It's written by Chollet, the author of Keras, so I highly recommend it.

## Multi-categorical text classification with LSTM

I created a prototype of a customer-service web application that does sequence classification with Keras. The prototype's purpose is to reply to our customers with the proper response, based on the category of the question they sent us. Each question belongs to one of several categories, and the application predicts which category a question belongs to.

If you are in a similar situation, this sample might be helpful for you.

You can see the whole source code on GitHub.

# Collect text data

Before creating a classification model, collect a data set to build it from. Many classification articles on the internet use the IMDB movie-review data set; instead, I use customer-service questions and their categories from our product. I collected this data and stored it as a TSV file.

## File format

The format is TSV, and each row consists of an id, a question, an answer, and the category of the question, like this:

This raw data set has about 9000 samples, though some are unusable, and about 15 question categories.

Load the data from the TSV file.

import csv

issues = []

with open("data/issues.tsv", 'r', encoding="utf-8") as tsv:
    reader = csv.reader(tsv, delimiter='\t')
    for row in reader:
        issue = []
        issue.append(row[1])  # question
        issue.append(row[3])  # category

        issues.append(issue)

# Pre-process text

## Remove unnecessary characters

These samples are too rough for learning: some have no question text, and some contain e-mail addresses or symbols such as hyphens. So we have to remove these unnecessary characters.

I removed them with regular expressions and dropped any question that became an empty string, like this:

import re

filtered_text = []
# example of a raw question text:
# "長らくお時間を頂戴しております。version: 1.2.3 ----------------------------------------"

for t in issues:
    result = re.compile('-+').sub('', t[0])
    result = re.compile('[0-9]+').sub('0', result)
    result = re.compile(r'\s+').sub('', result)
    # ... and many more regular-expression substitutions

    # a question can become an empty string, so skip those rows
    if len(result) > 0:
        filtered_text.append(result)

print("text:%s" % result)
# text:長らくお時間を頂戴しております。

## Create samples and labels

Create the samples and labels from the data set, which has about 15 question categories. I select two label types, 'Account' as two and 'Payment' as three, and give all remaining categories the label one, 'other'. The classes have to be roughly the same size, because LSTM training wouldn't work well if one of them had many more or fewer samples. In this case I cap each class at 700 samples, because the Payment label has only 688.

Create samples and labels

labels = []
samples = []
threshold = 700
cnt1 = 0  # other
cnt2 = 0  # Account
cnt3 = 0  # Payment

for i, row in enumerate(filtered_samples):
    if 'Account' in row[2]:
        if cnt2 < threshold:
            cnt2 += 1
            labels.append(2)
            samples.append(row[0])
    elif 'Payment' in row[2]:
        if cnt3 < threshold:
            cnt3 += 1
            labels.append(3)
            samples.append(row[0])
    else:
        if cnt1 < threshold:
            cnt1 += 1
            labels.append(1)
            samples.append(row[0])

filtered_samples is the data set after removing symbols, e-mail addresses, and the like from the samples.

## Separate the words by MeCab

The questions in the samples are written in Japanese, so the words have to be separated by spaces. Below is a question text in Japanese:

I used MeCab to get space-separated words:

import MeCab

def tokenize(text):
    wakati = MeCab.Tagger("-O wakati")
    wakati.parse("")
    words = wakati.parse(text)

    # strip the trailing newline MeCab appends
    if words[-1] == u"\n":
        words = words[:-1]

    return words

texts = [tokenize(a) for a in samples]

This tokenize function returns space-separated words:

## Divide the samples and labels

Divide the samples and labels into training data and validation data:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
from keras.utils.np_utils import to_categorical

maxlen = 1000
training_samples = 1600  # training data 80 : validation data 20
validation_samples = len(texts) - training_samples
max_words = 15000

# create the word index
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

word_index = tokenizer.word_index
print("Found {} unique tokens.".format(len(word_index)))

# pad every sequence with zeros to the same length
data = pad_sequences(sequences, maxlen=maxlen)

# convert the labels to a binary class matrix
categorical_labels = to_categorical(labels)
labels = np.asarray(categorical_labels)

print("Shape of data tensor:{}".format(data.shape))
print("Shape of label tensor:{}".format(labels.shape))

# shuffle the data and labels with the same permutation
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]

x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]

The data is an integer sequence like this:

[0, 0, 0, 10, 5, 24]

Each non-zero integer corresponds to a word, and zero stands for an “empty word.” This example has just three words, so the rest of the sequence is filled with zeros.
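That zero-padding can be mimicked with a toy helper (Keras's pad_sequences does this given maxlen; this version is my own illustration):

```python
def left_pad(seq, length, value=0):
    # pad on the left with `value`, or truncate from the left,
    # so the result always has exactly `length` entries
    if len(seq) >= length:
        return seq[-length:]
    return [value] * (length - len(seq)) + seq

print(left_pad([10, 5, 24], 6))  # [0, 0, 0, 10, 5, 24]
```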

# Create a model and learn features

I used Keras for learning features. It includes LSTM and Word embedding. LSTM is used for a sequence classification problem, sequence regression problem and so on.

## Create a model

from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM

model = Sequential()
# a minimal sketch: 32 is an illustrative size, not a value from the original post
model.add(Embedding(max_words, 32))
model.add(LSTM(32))
model.add(Dense(4, activation='softmax'))  # labels 0-3 as a one-hot matrix
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
model.summary()

This model trains the LSTM and, via Embedding(...), a word embedding at the same time. We could also use a pre-trained word embedding instead of learning one.

## Learn features

Just call model.fit()

history = model.fit(x_train, y_train, epochs=15, batch_size=32, validation_data=(x_val, y_val))

## Plot the result

%matplotlib inline

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

The result looks like this:

Finally, the validation accuracy reaches about 90 percent.

## Save the model

Save the model and the learned weights.

model.save('pre_trained_model.h5')

# Create a web application

I wanted to use the pre-trained model from a web application, so I chose Flask, since it is written in the same language as Keras. The application is simple: it receives a text, predicts, and then returns the category to the user. The page has a text area, an ask button, and the result of the prediction.

Predict a certain question

Before predicting a text, we have to build the same word index we created when building the pre-trained model.

app.py

# load the pre-trained model

# we have to pass padded_seq as a 2-dimensional array

Get the classified result:

np.argmax(res[0])
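For example, mapping the argmax index back to a category name (the category order is taken from the prediction tables earlier; `res` here is a hypothetical 1 x 4 model output, not a real prediction):

```python
import numpy as np

categories = ["etc", "other", "account", "payment"]

# a hypothetical model output for one input text
res = np.array([[0.0007114554, 0.04938373, 0.72704375, 0.0038164733]])

idx = int(np.argmax(res[0]))  # index of the highest score
print(categories[idx])        # account
```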

Please see the whole source code in my repository.

# Reference

Deep Learning with Python. This book was very helpful for me!

## Use Mono 2.X on Ubuntu 14.04 LTS

Unfortunately, I had to use Mono 2.X in my project. It's very old, released in 2012! I first tried to install it from source on Ubuntu 16.04 LTS, but make failed with an error like this:

./.libs/libmini-static.a(libmini_static_la-mini.o): In function `mono_get_jit_tls_offset':
/home/vagrant/mono-2.11.4/mono/mini/mini.c:2506: undefined reference to `mono_jit_tls'
/home/vagrant/mono-2.11.4/mono/mini/mini.c:2506: undefined reference to `mono_jit_tls'
collect2: error: ld returned 1 exit status
Makefile:1351: recipe for target 'mono' failed
make[4]: *** [mono] Error 1
make[4]: Leaving directory '/home/vagrant/mono-2.11.4/mono/mini'
Makefile:1209: recipe for target 'all' failed
make[3]: *** [all] Error 2
make[3]: Leaving directory '/home/vagrant/mono-2.11.4/mono/mini'
Makefile:344: recipe for target 'all-recursive' failed
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory '/home/vagrant/mono-2.11.4/mono'
Makefile:419: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/vagrant/mono-2.11.4'
Makefile:344: recipe for target 'all' failed
make: *** [all] Error 2

So I ended up using Ubuntu 14.04 LTS for Mono 2.X.

The error might be caused by the libraries involved in compilation, such as the gcc version.

## How to install CUDA and cuDNN on Ubuntu 16.04 LTS

I’ve been running this machine for TensorFlow and Keras with Jupyter notebook.

These are my environments:

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
$ lspci | grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)

# Disable secure boot on UEFI

First, we need to disable “secure boot” in the UEFI menu, because it prevents the NVIDIA driver from loading into the OS. Enter the UEFI menu during boot by pressing a key such as Delete.

That was the first time I had heard of it. I was used to the BIOS, but a lot of recently released motherboards use UEFI, and UEFI has secure boot (see the details here; this was helpful for me).

If you use an ASUS motherboard, this document also can be helpful.

# Remove old NVIDIA driver and CUDA

$ dpkg -l | grep nvidia
$ dpkg -l | grep cuda
$ sudo apt-get --purge remove nvidia-*
$ sudo apt-get --purge remove cuda-*

# Install new NVIDIA driver and CUDA9.0

## Install

$ sudo dpkg -i cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
$ sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get install cuda

~/.bashrc

export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"

Then reboot.

sudo reboot

$ ./mnistCUDNN
Test passed!

# Stop LightDM

LightDM starts running when you install CUDA. If you use Ubuntu Server, it doesn't need to run.

## Disable LightDM

Rewrite GRUB_CMDLINE_LINUX like below:

/etc/default/grub

GRUB_CMDLINE_LINUX="systemd.unit=multi-user.target"

Update and reboot.

sudo update-grub
sudo reboot

# What happened

The error message below appeared on iOS when launching an app, which then crashed immediately.

<Error>: Could not successfully update network info during initialization.

Before this message appeared, I had purchased a lot of auto-renewable subscriptions, about 300 times, and restored them over and over again to test subscription purchasing.

# The solution

Restore your device to factory settings. For me, that solved the error.

## Reproduce transactions as new transaction id when repurchasing an auto-renewable subscription

I was building an app with auto-renewable subscriptions.
When I tried to resubscribe to a subscription I had purchased before, iOS showed the dialog “You’re currently subscribed to this.” and enqueued every transaction I had purchased in the past into the default transaction queue, each with the status SKPaymentTransactionStatePurchased. As a result, I had to process a vast number of transactions. This happened in the Sandbox environment.

I expected only one transaction with SKPaymentTransactionStatePurchased to be enqueued.

The following steps reproduce all of the transactions:
1. Subscribe to a product
2. Subscribe to the same product again
3. The dialog “You’re currently subscribed to this.” pops up
4. Tap the OK button
5. Press the Home button to close the app
6. Launch the app again

I spent the whole day trying to solve this problem. Finally, I created a new sandbox user and used it only for purchasing subscriptions; then the behavior no longer occurred. I don't know why it happened.