
Deploying the neural network model and using it as an API



Please refer to the following example for a concrete API implementation:

https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/src/main/java/com/javadeeplearningcookbook/api/CustomerRetentionPredictionApi.java

In our API example, we restore the model file that was persisted earlier and use it to generate predictions.

How to do it...

1. Create a method to generate a schema for the user input:

private static Schema generateSchema(){
    Schema schema = new Schema.Builder()
        .addColumnString("RowNumber")
        .addColumnInteger("CustomerId")
        .addColumnString("Surname")
        .addColumnInteger("CreditScore")
        .addColumnCategorical("Geography", Arrays.asList("France","Germany","Spain"))
        .addColumnCategorical("Gender", Arrays.asList("Male","Female"))
        .addColumnsInteger("Age", "Tenure")
        .addColumnDouble("Balance")
        .addColumnsInteger("NumOfProducts","HasCrCard","IsActiveMember")
        .addColumnDouble("EstimatedSalary")
        .build();
    return schema;
}

2. Create a TransformProcess from the schema:

private static RecordReader applyTransform(RecordReader recordReader, Schema schema){
    final TransformProcess transformProcess = new TransformProcess.Builder(schema)
        .removeColumns("RowNumber","CustomerId","Surname")
        .categoricalToInteger("Gender")
        .categoricalToOneHot("Geography")
        .removeColumns("Geography[France]")
        .build();
    final TransformProcessRecordReader transformProcessRecordReader =
        new TransformProcessRecordReader(recordReader, transformProcess);
    return transformProcessRecordReader;
}

3. Load the data into a record reader instance:

private static RecordReader generateReader(File file) throws IOException, InterruptedException {
    final RecordReader recordReader = new CSVRecordReader(1,',');
    recordReader.initialize(new FileSplit(file));
    final RecordReader transformProcessRecordReader = applyTransform(recordReader, generateSchema());
    return transformProcessRecordReader;
}

4. Restore the model using ModelSerializer:

File modelFile = new File(modelFilePath);
MultiLayerNetwork network = ModelSerializer.restoreMultiLayerNetwork(modelFile);
NormalizerStandardize normalizerStandardize = ModelSerializer.restoreNormalizerFromFile(modelFile);

5. Create an iterator to traverse through the entire set of input records:

DataSetIterator dataSetIterator = new RecordReaderDataSetIterator.Builder(recordReader, 1).build();
normalizerStandardize.fit(dataSetIterator);
dataSetIterator.setPreProcessor(normalizerStandardize);

6. Design an API function to generate output from user input:

public static INDArray generateOutput(File inputFile, String modelFilePath) throws IOException, InterruptedException {
    File modelFile = new File(modelFilePath);
    MultiLayerNetwork network = ModelSerializer.restoreMultiLayerNetwork(modelFile);
    RecordReader recordReader = generateReader(inputFile);
    NormalizerStandardize normalizerStandardize = ModelSerializer.restoreNormalizerFromFile(modelFile);
    DataSetIterator dataSetIterator = new RecordReaderDataSetIterator.Builder(recordReader, 1).build();
    normalizerStandardize.fit(dataSetIterator);
    dataSetIterator.setPreProcessor(normalizerStandardize);
    return network.output(dataSetIterator);
}
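As a quick usage sketch (the file paths here are placeholders, not part of the book's code):

INDArray predictions = CustomerRetentionPredictionApi.generateOutput(new File("test.csv"), "/path/to/model.zip");
// Each row of the returned INDArray holds the prediction for one customer record
System.out.println(predictions);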

For a further example, see: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/src/main/java/com/javadeeplearningcookbook/api/CustomerRetentionPredictionApi.java

7. Build a shaded JAR of your DL4J API project by running the following Maven command:

mvn clean install
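Producing a shaded JAR requires the maven-shade-plugin in the project's pom.xml. If you are adapting this recipe to your own project, a minimal sketch of the plugin configuration looks like the following (the version number here is an assumption; pick a current release):

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.1</version>
            <executions>
                <execution>
                    <!-- Bind the shade goal to the package phase so mvn clean install produces the shaded JAR -->
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>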

8. Run the Spring Boot project included in the source directory. Import the Maven project to your IDE: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/tree/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/spring-dl4j.

Add the following VM option under run configurations:

-DmodelFilePath={PATH-TO-MODEL-FILE}

PATH-TO-MODEL-FILE is the location where you stored the actual model file. It can be on your local disk or in the cloud.
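Equivalently, when launching the packaged application from a terminal rather than the IDE (the path and JAR name below are placeholders):

java -DmodelFilePath=/home/user/models/churn-model.zip -jar spring-dl4j-1.0-SNAPSHOT.jar

The application can then read the property back with System.getProperty("modelFilePath").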

Then, run the SpringDl4jApplication.java file.

9. Test your Spring Boot app at http://localhost:8080/.

10. Verify the functionality by uploading an input CSV file.

Use a sample CSV file to upload into the web application: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/src/main/resources/test.csv.

The prediction results will then be displayed in the web application.

How it works...

We need to create an API that takes inputs from end users and generates the output. The end user uploads a CSV file with the inputs, and the API returns the prediction output to the user.
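The spring-dl4j project wires this up for you. Purely as an illustration, a minimal controller for such an endpoint could look like the sketch below; the class name, request mapping, and temp-file handling are assumptions for illustration, not the book's actual controller:

import java.io.File;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import com.javadeeplearningcookbook.api.CustomerRetentionPredictionApi;

@RestController
public class PredictionController {
    @PostMapping("/predict")
    public String predict(@RequestParam("file") MultipartFile file) throws Exception {
        // Persist the uploaded CSV to a temporary file so the record reader can consume it
        File tmp = File.createTempFile("input", ".csv");
        file.transferTo(tmp);
        // Delegate to the cookbookapp API; the model path arrives as the -DmodelFilePath VM option
        INDArray output = CustomerRetentionPredictionApi.generateOutput(tmp, System.getProperty("modelFilePath"));
        return output.toString();
    }
}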

In step 1, we define a schema for the input data. The user input should follow the schema structure with which we trained the model, except that the Exited label is omitted, since predicting it is the task of the trained model. In step 2, we create a TransformProcess from the schema built in step 1.

In step 3, we use the TransformProcess from step 2 to create a record reader instance, which loads the data from the dataset.

We expect the end users to upload batches of inputs to generate outcomes, so an iterator needs to be created, as in step 5, to traverse through the entire set of input records. We set the preprocessor for the iterator using the normalizer restored alongside the pretrained model in step 4. Also, we used a batchSize value of 1; if you have more input samples, you can specify a reasonable batch size.
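For example, to process eight records per iteration (the value 8 is arbitrary, for illustration only):

DataSetIterator dataSetIterator = new RecordReaderDataSetIterator.Builder(recordReader, 8).build();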

In step 6, we used a file path named modelFilePath to represent the model file location. We pass this as a command-line argument from the Spring application, so you can configure your own custom path where the model file is persisted. After step 7, a shaded JAR with all the DL4J dependencies will be created and saved in the local Maven repository. You can also view the JAR file in the project's target directory.

Dependencies of customer retention API are added to the pom.xml file of the Spring Boot project, as shown here:

<dependency>
    <groupId>com.javadeeplearningcookbook.app</groupId>
    <artifactId>cookbookapp</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Once you have created a shaded JAR for the API by following step 7, the Spring Boot project will be able to fetch the dependencies from your local repository. So, you need to build the API project first before importing the Spring Boot project. Also, make sure to add the model file path as a VM argument, as mentioned in step 8.

In a nutshell, these are the steps required to run the use case:

1. Import and build the Customer Churn API project: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/.
2. Run the main example to train the model and persist the model file: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/src/main/java/com/javadeeplearningcookbook/examples/CustomerRetentionPredictionExample.java.
3. Build the customer churn API project: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/cookbookapp/.
4. Run the Spring Boot project by running the starter here (with the VM arguments mentioned earlier): https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/03_Building_Deep_Neural_Networks_for_Binary_classification/sourceCode/spring-dl4j/src/main/java/com/springdl4j/springdl4j/SpringDl4jApplication.java.

Chapter 4: Building Convolutional Neural Networks

In this chapter, we are going to develop a convolutional neural network (CNN) for an image classification example using DL4J. We will develop the components of our application step by step while we progress through the recipes. The chapter assumes that you have read Chapter 1, Introduction to Deep Learning in Java, and Chapter 2, Data Extraction, Transformation, and Loading, and that you have set up DL4J on your computer, as mentioned in Chapter 1. Let's go ahead and discuss the specific changes required for this chapter.

For demonstration purposes, we will have classifications for four different species. CNNs convert complex images into an abstract format that can be used for prediction. Hence, a CNN would be an optimal choice for this image classification problem.

CNNs are just like any other deep neural network that abstracts the decision process and gives us an interface to transform input to output. The only difference is that they support other types of layers and different orderings of layers. Unlike other forms of input, such as text or CSV, images are complex. Considering the fact that each pixel is a source of information, training will become resource intensive and time consuming for large numbers of high-resolution images.
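As a taste of what such a layer ordering looks like in DL4J, here is a minimal sketch of a configuration that stacks a convolution layer, a subsampling (pooling) layer, and an output layer. The layer sizes and input dimensions are placeholders; the actual configuration is developed in the recipes that follow:

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration config = new NeuralNetConfiguration.Builder()
    .list()
    // 3x3 convolution over 3 input channels (RGB), producing 16 feature maps
    .layer(new ConvolutionLayer.Builder(3, 3).nIn(3).nOut(16).activation(Activation.RELU).build())
    // 2x2 max pooling to downsample the feature maps
    .layer(new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2).build())
    // Softmax output over the four labels
    .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
        .nOut(4).activation(Activation.SOFTMAX).build())
    // Tell DL4J the input is image-shaped: height x width x channels (placeholder values)
    .setInputType(InputType.convolutional(64, 64, 3))
    .build();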

In this chapter, we will cover the following recipes:

Extracting images from disk

Creating image variations for training data

Image preprocessing and the design of input layers
Constructing hidden layers for a CNN
Constructing output layers for output classification
Training images and evaluating CNN output
Creating an API endpoint for the image classifier

Technical requirements

Implementation of the use case discussed in this chapter can be found here: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/tree/master/04_Building_Convolutional_Neural_Networks/sourceCode.

After cloning our GitHub repository, navigate to the following directory: Java-Deep-Learning-Cookbook/04_Building_Convolutional_Neural_Networks/sourceCode. Then, import the cookbookapp project as a Maven project by importing pom.xml.

You will also find a basic Spring project, spring-dl4j, which can be imported as a Maven project as well.

We will be using the dog breeds classification dataset from Oxford for this chapter.

The principal dataset can be downloaded from the following link: https://www.kaggle.com/zippyz/cats-and-dogs-breeds-classification-oxford-dataset.

To run this chapter's source code, download the dataset (four labels only) from here: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/raw/master/04_Building%20Convolutional%20Neural%20Networks/dataset.zip (it can be found in the Java-Deep-Learning-Cookbook/04_Building Convolutional Neural Networks/ directory).

Extract the compressed dataset file. Images are kept in different directories, and each directory represents a label/category; an example layout is sketched below. For demonstration purposes, we have used four labels. However, you can experiment with more images from different categories in order to run our example from GitHub.
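For instance, the extracted directory might look along these lines (the breed names here are illustrative, not the exact labels in the archive):

dataset/
    beagle/
        image_001.jpg
        image_002.jpg
    boxer/
        image_001.jpg
    ...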

Note that our example is optimized for four species. Experimentation with a larger number of labels requires further network configuration optimization.

To leverage the capabilities of the OpenCV library in your CNN, add the following Maven dependency:

<dependency>
    <groupId>org.bytedeco.javacpp-presets</groupId>
    <artifactId>opencv-platform</artifactId>
    <version>4.0.1-1.4.4</version>
</dependency>

We will be using the Google Cloud SDK to deploy the application in the cloud. For instructions in this regard, refer to https://github.com/GoogleCloudPlatform/app-maven-plugin. For Gradle instructions, refer to https://github.com/GoogleCloudPlatform/app-gradle-plugin.
