The solution to “<Error>: Could not successfully update network info during initialization.”

What happened

The following error message appeared on iOS when launching an app, and the app crashed immediately.

<Error>: Could not successfully update network info during initialization.

Before this message appeared, I had purchased auto-renewable subscriptions about 300 times and restored them over and over again while testing subscription purchases.

The solution

Restore your device to factory settings. In my case, that resolved the error.

Transactions are reproduced with new transaction IDs when repurchasing an auto-renewable subscription

I was building an app that has auto-renewable subscriptions.
When I tried to resubscribe to a subscription I had subscribed to before, iOS showed the dialog “You’re currently subscribed to this.” and enqueued all of the transactions I had purchased in the past into the default transaction queue. The state of every transaction was SKPaymentTransactionStatePurchased. As a result, I had to process a vast number of transactions. This happened in the Sandbox environment.
 
I expected only one transaction in the SKPaymentTransactionStatePurchased state to be enqueued.
 
The following list is my scenario that reproduces all transactions:
1. Subscribe to a product
2. Subscribe to the same product again
3. The dialog “You’re currently subscribed to this.” pops up
4. Tap the OK button
5. Press the Home button and close the app
6. Tap and launch the app again

I tried to solve this problem throughout the day. Finally, I created a new sandbox user and used it only for purchasing subscriptions. After that, this behavior no longer occurred. I don’t know why it happened.
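
Whatever the cause, every stale transaction arrives in the SKPaymentTransactionStatePurchased state, so the observer has to finish each one or the queue stays flooded at every launch. Below is a minimal sketch of such an observer, assuming a hypothetical app-side helper deliverContent(for:); it is not my exact implementation.

import StoreKit

final class StoreObserver: NSObject, SKPaymentTransactionObserver {
    func paymentQueue(_ queue: SKPaymentQueue,
                      updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions {
            switch transaction.transactionState {
            case .purchased, .restored:
                // deliverContent(for:) is a hypothetical app-side helper.
                deliverContent(for: transaction)
                // Always finish the transaction; otherwise it is
                // re-delivered on every launch.
                queue.finishTransaction(transaction)
            case .failed:
                queue.finishTransaction(transaction)
            default:
                break
            }
        }
    }

    private func deliverContent(for transaction: SKPaymentTransaction) {
        // Grant the item for transaction.payment.productIdentifier here.
    }
}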

The test cases for In-App Purchase on iOS

I happened to implement In-App Purchase on iOS with Unity.
I looked for test cases for In-App Purchase but couldn’t find any, so I’m writing down the test cases I checked myself.

First, please read In-App Purchase Best Practices. You should implement your In-App Purchase according to this document.

For instance, I implemented the following (a sketch of the first item follows this list):

  • Add a transaction queue observer at application launch
  • Query the App Store for product information before presenting your app’s store UI
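
As a minimal sketch of the first item, assuming StoreObserver is an app-side class conforming to SKPaymentTransactionObserver:

import UIKit
import StoreKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    // StoreObserver is an assumed app-side SKPaymentTransactionObserver.
    let storeObserver = StoreObserver()

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Register at launch so transactions queued while the app was not
        // running (for example, renewals) are delivered right away.
        SKPaymentQueue.default().add(storeObserver)
        return true
    }
}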

Figure 1 shows the process for the second item in the list, from showing available products to purchasing them; a minimal sketch follows figure 1.

Figure 1.

  1. Retrieve product identifiers from a data store
  2. Request products from the App Store
  3. Show the available products in the app
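
Here is a minimal sketch of steps 1 to 3 with StoreKit; the product identifier is a placeholder, and in a real app it would come from your own data store:

import StoreKit

final class ProductFetcher: NSObject, SKProductsRequestDelegate {
    private var request: SKProductsRequest?

    func fetchProducts() {
        // Step 1: identifiers would normally come from a data store.
        let identifiers: Set<String> = ["com.example.subscription.monthly"]
        // Step 2: ask the App Store which of them are valid products.
        let request = SKProductsRequest(productIdentifiers: identifiers)
        request.delegate = self
        request.start()
        self.request = request // keep a strong reference while in flight
    }

    func productsRequest(_ request: SKProductsRequest,
                         didReceive response: SKProductsResponse) {
        // Step 3: only products returned here should appear in the store UI.
        for product in response.products {
            print(product.productIdentifier, product.price)
        }
        // Identifiers the App Store could not validate; hide these.
        print("Invalid:", response.invalidProductIdentifiers)
    }
}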

Below are the test cases I checked. In each table, the left column is the test case and the right column is the expected result.

Purchasing and Subscribing test cases

Test case | Expected result
Show a product list scene | The product list scene shows buyable products
Buy products | All products can be bought
Close the app in the middle of a purchase | The item corresponding to the purchased product is given to the user at the next launch
Disconnect the internet connection in the middle of a purchase | The items corresponding to the purchase are given to the user at the next startup
Purchase a subscription product in advance, then switch to another Apple account on the same device | The purchased subscription product must become unbuyable
Purchase a subscription product with the same Apple account on different devices (purchase the product on Device-A, then purchase it on Device-B) | The purchase must succeed on Device-A and must not succeed on Device-B
Purchase a subscription product with the same Apple account shared between two devices | The product must be purchasable on both devices
Purchase a subscription product in advance and close the app completely. Wait until the subscription is renewed by the App Store, then launch the app | The user must be given the items corresponding to the subscribed product
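
For reference, a minimal sketch of how the purchases these cases exercise are initiated; the result arrives asynchronously in the transaction observer:

import StoreKit

func buy(_ product: SKProduct) {
    // Respect parental controls and other payment restrictions.
    guard SKPaymentQueue.canMakePayments() else { return }
    let payment = SKPayment(product: product)
    // The observer added at launch receives the transaction updates.
    SKPaymentQueue.default().add(payment)
}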

Restoring

Implement restoring purchases for subscription products; a minimal sketch follows the table.

Test case | Expected result
Restore purchases after the subscription product is renewed | The app gives the user the items corresponding to the subscription
Buy a subscription product in advance with the same Apple account and use it on both Device-A and Device-B. Restore on Device-A, then on Device-B | The app gives the items corresponding to the subscription on Device-A, although it doesn’t on Device-B
Launch the app after closing it in the middle of restoring purchased products | Unprocessed transactions are not executed
Sign out from the App Store, then launch the app | The user is prompted to sign in to the App Store if unprocessed transactions exist
Tap the Buy button multiple times | Only one In-App Purchase dialog is shown
Include product identifiers that are not registered in iTunes Connect in the app’s buyable item list | The unregistered products are not shown in the app’s item list
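
Here is the minimal sketch of the restore flow these cases exercise; grantItems(for:) is a hypothetical app-side helper:

import StoreKit

final class RestoreObserver: NSObject, SKPaymentTransactionObserver {
    func restore() {
        // Re-delivers completed transactions as .restored updates.
        SKPaymentQueue.default().restoreCompletedTransactions()
    }

    func paymentQueue(_ queue: SKPaymentQueue,
                      updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions where transaction.transactionState == .restored {
            grantItems(for: transaction)
            queue.finishTransaction(transaction)
        }
    }

    // Called once every restorable transaction has been delivered.
    func paymentQueueRestoreCompletedTransactionsFinished(_ queue: SKPaymentQueue) {
        print("Restore finished")
    }

    private func grantItems(for transaction: SKPaymentTransaction) {
        // Grant the items for transaction.payment.productIdentifier here.
    }
}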


Create a Generative Adversarial Network iOS App with CoreML

I created an app that generates handwritten images with CoreML on iOS.

I’m going to explain the process of releasing this app from beginning to end.

The software versions

  • Xcode 9.2 (9C40b)
  • Docker 17.09.1-ce
  • TensorFlow Docker image (sha256:1bb38d61d261e5c9230a1e60b5d200088eb03014fdb35a91859fa55ea0d2c4d5), https://hub.docker.com/r/tensorflow/tensorflow
  • TensorFlow 1.2.1
  • Keras 2.0.6
  • coremltools 0.7

The process of making a handwritten image generator app on iOS and submitting it to the App Store

Figure 1 illustrates the path of distributing this app through the App Store:

  1. Create a GAN (Generative Adversarial Network) model with Keras
  2. Convert the Keras GAN model to a CoreML model
  3. Create an app that shows handwritten images generated by the CoreML model
  4. Distribute the app

Figure 1. The path of distributing this app through the App Store

Install Docker and run a Jupyter Notebook that includes TensorFlow and Keras

First, to create a Keras model, we have to install Docker.

Install Docker

On Mac, get it from this page (Stable channel) and just install it.

Run Jupyter Notebook with TensorFlow

Fortunately, there is a Docker image that includes TensorFlow.

Just type this command (publishing port 8888 so the notebook is reachable from your browser):

$ docker run -d -p 8888:8888 --name notebook tensorflow/tensorflow:latest

Install Keras and CoreML converter

$ docker exec -it notebook /bin/bash
$ pip install -U keras
$ pip install -U coremltools

Check that you can access the Jupyter Notebook

Before accessing it, you need to get the access token.

$ docker logs notebook
# You will find an access token like this:
#
#    Copy/paste this URL into your browser when you connect for the first time,
#    to login with a token:
#        http://localhost:8888/?token=ecf1a2471670eb6863195ab530d6ac1d5cc27511faca0afe

Copy the URL with the access token and open it in your browser.

Now it’s done! You can execute Keras code!

Make a Keras model

Add a new notebook in Jupyter Notebook and write a GAN model. I referred to this source code. Copy that code, paste it into a cell, and add the code below, which saves the trained models to a directory.

if __name__ == '__main__':
    gan = GAN()
    gan.train(epochs=30000, batch_size=32, save_interval=200)
    gan.discriminator.save('./discriminator.h5')  # Path where the discriminator model is saved
    gan.generator.save('./generator.h5')  # Path where the generator model is saved

Then just run the cell. It took about 25 minutes on my MacBook Pro 2016 to train this model.

Keras saves the trained models, with the “.h5” extension, in the directory where the cell was executed.

Convert a Keras model to CoreML model

$ coremlconverter --srcModelPath ./keras_model.h5 --dstModelPath ./coreml_model.mlmodel --inputNames ganInput --outputNames ganOutput

This command uses TensorFlow in the background to convert the Keras model to a CoreML model. After the conversion, it creates a CoreML model file with the “.mlmodel” extension in the specified directory.

Create an app to show handwritten images

There are several ways to present a generated handwritten image, for instance as a web application or a mobile application. This time, I created an app for iOS.

Create the app

Here is the source code for the app:
https://github.com/yanak/gangen/blob/master/Gangen/HandwrittenImage.swift

Figure 2 is the app’s screenshot.


Figure 2. The GANs generator app’s screenshot

This app generates a 28 x 28 handwritten image using the CoreML model converted from Keras and renders the image with UIKit. There is also a button that regenerates the handwritten image.

Generate a handwritten image

First, import the CoreML model converted from the Keras model into the Xcode project. It looks like figure 3:


Figure 3. The CoreML model properties

After Xcode imports a model, it automatically generates a class for it. This class has a prediction method whose first argument is the Keras model’s input. The first argument’s label uses the value of the CoreML converter’s --inputNames option. In this case, the class labels prediction’s first argument ganInput and the generated handwritten image output ganOutput. You can see these definitions under Model Evaluation Parameters (Inputs and Outputs) in the “.mlmodel” file (see figure 3).

import UIKit
import CoreML

// Create a gan instance (the class Xcode generates from the .mlmodel)
let model = gan()

// Generate a handwritten image. The generator takes a noise vector;
// its shape (assumed here to be 100 elements) and the ganInput label
// must match the converted model's input.
let input = try! MLMultiArray(shape: [100], dataType: .double)
for k in 0..<input.count {
    input[k] = NSNumber(value: drand48() * 2 - 1)
}
let output = try! model.prediction(ganInput: input)

// Render the hand-written image; the constants are 27 because the
// ranges below are inclusive, giving 28 x 28 pixels
let HEIGHT = 27
let WIDTH = 27

for i in 0...HEIGHT {
  for j in 0...WIDTH {
    // Create the path
    let plusPath = UIBezierPath()

    // Set the path's line width to the height of the stroke
    // (Constants.plusLineWidth is defined elsewhere in the app)
    plusPath.lineWidth = Constants.plusLineWidth

    // Move the initial point of the path
    // to the start of the horizontal stroke
    plusPath.move(to: CGPoint(
      x: CGFloat(j * 10),
      y: CGFloat(i * 10) + Constants.plusLineWidth / 2
    ))

    // Add a point to the path at the end of the stroke
    plusPath.addLine(to: CGPoint(
      x: CGFloat((j * 10) + 10),
      y: CGFloat(i * 10) + Constants.plusLineWidth / 2
    ))

    // Set the stroke color to the pixel value at (i, j)
    let index: [NSNumber] = [0 as NSNumber, i as NSNumber, j as NSNumber]
    UIColor(white: CGFloat(truncating: output.ganOutput[index]), alpha: 1).setStroke()

    // Draw the stroke
    plusPath.stroke()
  }
}

Open this project and you can run it in a simulator.

Enroll in the Apple Developer Program

After checking that the app runs, submit it to the App Store! If you haven’t enrolled in the Apple Developer Program yet, you need to do it here, because enrollment is required to submit an app to the App Store.

Submit to the App Store

For details, see Submitting Your Apps.

  1. Archiving
    To archive, in Xcode choose Product > Archive.

  2. Validating
    Xcode shows the archived app in the Archives organizer. Validate the archived app before uploading it to the App Store.

  3. Upload to iTunes Connect
    Click Upload to App Store.

Publish the app

For details, see the iTunes Connect Developer Help page.

  1. Add the app in iTunes Connect
  2. Add the app icon, app previews, and screenshots
  3. Submit the app to App Review
  4. Release the app

Use native code on iOS with Unity

First, write the native code you want to use on iOS. Read this document.

I needed to write native code for In-App Purchase on iOS.

Then just put your native code in the Assets/Plugins/iOS directory. The hierarchy looks like this: for example, using Objective-C, Assets/Plugins/iOS/sample.m and Assets/Plugins/iOS/sample.h.

Build on Unity

In File > Build Settings, switch to the iOS platform and then click Build and Run. When the build completes, Xcode launches automatically and builds the project created by Unity.

Build on Xcode

The initial build might fail with a signing problem where Team is None. To solve this, just select a team in the Team pull-down menu; Xcode then signs your app automatically as the selected team.

Click the build button at the top left of the Xcode window!

Receive a callback as JSON from iOS code

Set up a GameObject in a Unity script before calling the native code:

var gameObject = GameObject.Find("PurchaseHandler");
if (gameObject != null) return;

gameObject = new GameObject("PurchaseHandler");
if (UnityEngine.Application.isPlaying)
{
    GameObject.DontDestroyOnLoad(gameObject);
}

gameObject.AddComponent<PurchaseHandler>();

Call UnitySendMessage on iOS

sample.m

void GetProductList(const char* json)
{
    ...
    NSString *jsonString = @"{list: [ ... ]}";
    UnitySendMessage([@"PurchaseHandler" UTF8String], [@"onSuccess" UTF8String], [jsonString UTF8String]);
}

Receive a message of UnitySendMessage

using System.Collections;
using UnityEngine;

public class PurchaseHandler : MonoBehaviour
{
    // onSuccess is invoked by UnitySendMessage with the JSON string.
    private IEnumerator onSuccess(string message)
    {
        // doSomething is the app-side handler for the received JSON.
        yield return doSomething(message);
    }
}

I went to iOSDC 2017

I went to iOSDC 2017 on September 17th.
This conference for iOS developers was held from September 16th to 17th at Waseda University.

I heard these sessions:

飛び道具ではないMetal
This session explained how to use Metal to render an image on the display. The most interesting point was that drawing a picture without Metal can be faster than drawing with it.

Apple TV – tvOS入門 –

React Native vs. iOSエンジニア

I didn’t have time to listen to all the sessions this time, and I didn’t yet have much knowledge of iOS development, so there were some sessions I couldn’t make sense of. Now, I’ve been developing an iOS app with Swift!