diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-inference-android-app/user-interface.md b/content/learning-paths/cross-platform/pytorch-digit-classification-inference-android-app/user-interface.md
new file mode 100644
index 000000000..e0f5e98ed
--- /dev/null
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-inference-android-app/user-interface.md
@@ -0,0 +1,292 @@
+---
+# User change
+title: "Create an Android App"
+
+weight: 3
+
+layout: "learningpathall"
+---
+
+In this section you will create an Android app to run the digit classifier. The application will load a randomly selected image containing a handwritten digit, together with its true label. You will then be able to run inference on this image to predict the digit.
+
+Start by creating a project and a user interface:
+1. Open Android Studio and create a new project with an “Empty Views Activity.”
+2. Set the project name to **ArmPyTorchMNISTInference**, set the package name to: **com.arm.armpytorchmnistinference**, select **Kotlin** as the language, and set the minimum SDK to **API 27 ("Oreo" Android 8.1)**.
+
+We set the API to Android 8.1 (API level 27) as it introduced NNAPI, providing a standard interface for running computationally intensive machine learning models on Android devices. Devices with Arm-based SoCs and corresponding hardware accelerators can leverage NNAPI to offload ML tasks to specialized hardware, such as NPUs (Neural Processing Units), DSPs (Digital Signal Processors), or GPUs (Graphics Processing Units).
+
+## User interface
+You will design the user interface to contain the following:
+1. A header.
+2. An ImageView and TextView to display the image and its true label.
+3. A button to load the image.
+4. A button to run inference.
+5. Two TextView controls to display the predicted label and inference time.
+
+To do so, replace the contents of activity_main.xml (located under src/main/res/layout) with the following code:
+
+```XML
+<!-- Layout reconstructed from the description below: the IDs and strings
+     match MainActivity.kt; sizes and margins are representative values. -->
+<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
+    android:layout_width="match_parent"
+    android:layout_height="match_parent"
+    android:orientation="vertical"
+    android:gravity="center_horizontal"
+    android:padding="16dp">
+
+    <TextView
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:layout_marginBottom="16dp"
+        android:text="Digit Recognition"
+        android:textSize="24sp"
+        android:textStyle="bold" />
+
+    <ImageView
+        android:id="@+id/imageView"
+        android:layout_width="200dp"
+        android:layout_height="200dp"
+        android:layout_marginBottom="8dp"
+        android:src="@drawable/sample_image" />
+
+    <TextView
+        android:id="@+id/trueLabel"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:layout_marginBottom="16dp"
+        android:text="True Label: N/A" />
+
+    <Button
+        android:id="@+id/selectImageButton"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:text="Load Image" />
+
+    <Button
+        android:id="@+id/runInferenceButton"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:layout_marginBottom="16dp"
+        android:text="Run Inference" />
+
+    <TextView
+        android:id="@+id/predictedLabel"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:text="Predicted Label: N/A" />
+
+    <TextView
+        android:id="@+id/inferenceTime"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:text="Inference Time: N/A" />
+
+</LinearLayout>
+```
+
+The provided XML code defines a user interface layout for an Android activity using a vertical LinearLayout. It includes several UI components arranged vertically with padding and centered alignment. At the top, there is a TextView acting as a header, displaying the text “Digit Recognition” in bold and with a large font size. Below the header, an ImageView is used to display an image, with a default source set to sample_image. This is followed by another TextView that shows the true label of the displayed image, initially set to “True Label: N/A”.
+
+The layout also contains two buttons: one labeled “Load Image” for selecting an input image, and another labeled “Run Inference” to execute the inference process on the selected image. At the bottom, there are two TextView elements to display the predicted label and the inference time, both initially set to “N/A”. The layout uses margins and appropriate sizes for each element to ensure a clean and organized appearance.
+
+## Add PyTorch to the project
+Before going further you will need to add PyTorch to the Android project. To do so, open the build.gradle.kts (Module:app) file and add the following two lines under dependencies:
+
+```Kotlin
+implementation("org.pytorch:pytorch_android:1.10.0")
+implementation("org.pytorch:pytorch_android_torchvision:1.10.0")
+```
+
+The dependencies section should look as follows:
+```Kotlin
+dependencies {
+ implementation(libs.androidx.core.ktx)
+ implementation(libs.androidx.appcompat)
+ implementation(libs.material)
+ implementation(libs.androidx.activity)
+ implementation(libs.androidx.constraintlayout)
+ testImplementation(libs.junit)
+ androidTestImplementation(libs.androidx.junit)
+ androidTestImplementation(libs.androidx.espresso.core)
+
+ implementation("org.pytorch:pytorch_android:1.10.0")
+ implementation("org.pytorch:pytorch_android_torchvision:1.10.0")
+}
+```
+
+## Logic implementation
+You will now implement the logic for the application. This will include loading the pre-trained model, loading and displaying images, and running the inference.
+
+Open MainActivity.kt and modify it as follows:
+
+```Kotlin
+package com.arm.armpytorchmnistinference
+
+import android.graphics.Bitmap
+import android.graphics.BitmapFactory
+import android.os.Bundle
+import android.widget.Button
+import android.widget.ImageView
+import android.widget.TextView
+import androidx.activity.enableEdgeToEdge
+import androidx.appcompat.app.AppCompatActivity
+import org.pytorch.IValue
+import org.pytorch.Module
+import org.pytorch.Tensor
+import java.io.File
+import java.io.FileOutputStream
+import java.io.IOException
+import java.io.InputStream
+import kotlin.random.Random
+import kotlin.system.measureNanoTime
+
+class MainActivity : AppCompatActivity() {
+ private lateinit var imageView: ImageView
+ private lateinit var trueLabel: TextView
+ private lateinit var selectImageButton: Button
+ private lateinit var runInferenceButton: Button
+ private lateinit var predictedLabel: TextView
+ private lateinit var inferenceTime: TextView
+ private lateinit var model: Module
+ private var currentBitmap: Bitmap? = null
+ private var currentTrueLabel: Int? = null
+
+ override fun onCreate(savedInstanceState: Bundle?) {
+ super.onCreate(savedInstanceState)
+ enableEdgeToEdge()
+ setContentView(R.layout.activity_main)
+
+ // Initialize UI elements
+ imageView = findViewById(R.id.imageView)
+ trueLabel = findViewById(R.id.trueLabel)
+ selectImageButton = findViewById(R.id.selectImageButton)
+ runInferenceButton = findViewById(R.id.runInferenceButton)
+ predictedLabel = findViewById(R.id.predictedLabel)
+ inferenceTime = findViewById(R.id.inferenceTime)
+
+ // Load model from assets
+ model = Module.load(assetFilePath("model.pth"))
+
+ // Set up button click listener for selecting random image
+ selectImageButton.setOnClickListener {
+ selectRandomImageFromAssets()
+ }
+
+ // Set up button click listener for running inference
+ runInferenceButton.setOnClickListener {
+ currentBitmap?.let { bitmap ->
+ runInference(bitmap)
+ }
+ }
+ }
+
+ private fun selectRandomImageFromAssets() {
+ try {
+ // Get list of files in the mnist_bitmaps folder
+ val assetManager = assets
+ val files = assetManager.list("mnist_bitmaps") ?: arrayOf()
+
+ if (files.isEmpty()) {
+ trueLabel.text = "No images found in assets/mnist_bitmaps"
+ return
+ }
+
+ // Select a random file from the list
+ val randomFile = files[Random.nextInt(files.size)]
+ val inputStream: InputStream = assetManager.open("mnist_bitmaps/$randomFile")
+ val bitmap = BitmapFactory.decodeStream(inputStream)
+
+ // Extract the true label from the filename (e.g., 07_00.png -> true label is 7)
+ currentTrueLabel = randomFile.split("_")[0].toInt()
+
+ // Display the image and its true label
+ imageView.setImageBitmap(bitmap)
+ trueLabel.text = "True Label: $currentTrueLabel"
+
+ // Set the current bitmap for inference
+ currentBitmap = bitmap
+ } catch (e: IOException) {
+ e.printStackTrace()
+ trueLabel.text = "Error loading image from assets"
+ }
+ }
+
+ // Method to convert a grayscale bitmap to a float array and create a tensor with shape [1, 1, 28, 28]
+ private fun createTensorFromBitmap(bitmap: Bitmap): Tensor {
+ // Ensure the bitmap is in the correct format (grayscale) and dimensions [28, 28]
+ if (bitmap.width != 28 || bitmap.height != 28) {
+ throw IllegalArgumentException("Expected bitmap of size [28, 28], but got [${bitmap.width}, ${bitmap.height}]")
+ }
+
+ // Convert the grayscale bitmap to a float array
+ val width = bitmap.width
+ val height = bitmap.height
+ val floatArray = FloatArray(width * height)
+ val pixels = IntArray(width * height)
+ bitmap.getPixels(pixels, 0, width, 0, 0, width, height)
+
+ for (i in pixels.indices) {
+ // Normalize pixel values to [0, 1] range, assuming the grayscale image stores values in the R channel
+ floatArray[i] = (pixels[i] and 0xFF) / 255.0f
+ }
+
+ // Create a tensor with shape [1, 1, 28, 28] (batch size, channels, height, width)
+ return Tensor.fromBlob(floatArray, longArrayOf(1, 1, height.toLong(), width.toLong()))
+ }
+
+ private fun runInference(bitmap: Bitmap) {
+ // Convert bitmap to a float array and create a tensor with shape [1, 1, 28, 28]
+ val inputTensor = createTensorFromBitmap(bitmap)
+
+ // Run inference and measure time
+ val inferenceTimeMicros = measureTimeMicros {
+ // Forward pass through the model
+ val outputTensor = model.forward(IValue.from(inputTensor)).toTensor()
+ val scores = outputTensor.dataAsFloatArray
+
+ // Get the index of the class with the highest score
+ val maxIndex = scores.indices.maxByOrNull { scores[it] } ?: -1
+ predictedLabel.text = "Predicted Label: $maxIndex"
+ }
+
+ // Update inference time TextView in microseconds
+ inferenceTime.text = "Inference Time: $inferenceTimeMicros µs"
+ }
+
+ // Method to measure execution time in microseconds
+ private inline fun measureTimeMicros(block: () -> Unit): Long {
+ val time = measureNanoTime(block)
+ return time / 1000 // Convert nanoseconds to microseconds
+ }
+
+ // Helper function to get the file path from assets
+ private fun assetFilePath(assetName: String): String {
+ val file = File(filesDir, assetName)
+ assets.open(assetName).use { inputStream ->
+ FileOutputStream(file).use { outputStream ->
+ val buffer = ByteArray(4 * 1024)
+ var read: Int
+ while (inputStream.read(buffer).also { read = it } != -1) {
+ outputStream.write(buffer, 0, read)
+ }
+ outputStream.flush()
+ }
+ }
+ return file.absolutePath
+ }
+}
+```
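In the code above, `runInference` takes the index of the largest raw score as the predicted digit; the values returned by `model.forward` are unnormalized logits. If you also want to show a confidence value next to the predicted label, you can normalize the scores with softmax. A minimal sketch of the idea in Python (illustrative only, not part of the app):

```python
import math

def softmax(scores):
    """Numerically stable softmax: turns raw logits into probabilities."""
    m = max(scores)                       # subtract max to avoid overflow in exp
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Example: ten raw scores from a digit classifier; the index of the largest
# probability is the predicted digit, and its value is the confidence.
scores = [0.1, 2.3, -1.0, 5.2, 0.0, 0.4, -2.1, 1.7, 0.9, -0.5]
probs = softmax(scores)
predicted = max(range(len(probs)), key=lambda i: probs[i])
print(predicted)  # -> 3
```

Softmax preserves the argmax, so the predicted digit is unchanged; only the extra probability value becomes available for display.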
+
+## Prepare model and data
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/Figures/01.png b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/Figures/01.png
new file mode 100644
index 000000000..97e2fb65f
Binary files /dev/null and b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/Figures/01.png differ
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/Figures/02.png b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/Figures/02.png
new file mode 100644
index 000000000..b76e2d056
Binary files /dev/null and b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/Figures/02.png differ
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_index.md b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_index.md
new file mode 100644
index 000000000..8a679fa2b
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_index.md
@@ -0,0 +1,44 @@
+---
+title: Use a PyTorch model for digit classification in Android App
+
+minutes_to_complete: 90
+
+who_is_this_for: This is an introductory topic for software developers interested in learning how to run inference with a pre-trained PyTorch model on Android.
+
+learning_objectives:
+  - Creating an Android app and loading the pre-trained model.
+ - Preparing an input dataset.
+ - Measuring the inference time.
+
+prerequisites:
+  - Any computer which can run Python3, Visual Studio Code, and Android Studio; this can be Windows, Linux, or macOS.
+
+author_primary: Dawid Borycki
+
+### Tags
+skilllevels: Introductory
+subjects: ML
+armips:
+ - Cortex-A
+ - Cortex-X
+ - Neoverse
+operatingsystems:
+ - Windows
+ - Linux
+ - macOS
+tools_software_languages:
+ - Android Studio
+ - Visual Studio Code
+ - Coding
+shared_path: true
+shared_between:
+ - servers-and-cloud-computing
+ - laptops-and-desktops
+ - smartphones-and-mobile
+
+### FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 1 # _index.md always has weight of 1 to order correctly
+layout: "learningpathall" # All files under learning paths have this same wrapper
+learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+---
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_next-steps.md b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_next-steps.md
new file mode 100644
index 000000000..4afb181fd
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_next-steps.md
@@ -0,0 +1,43 @@
+---
+# ================================================================================
+# Edit
+# ================================================================================
+
+next_step_guidance: >
+ Proceed to Get Started with Arm Performance Studio for mobile to continue learning about Android performance analysis.
+
+# 1-3 sentence recommendation outlining how the reader can generally keep learning about these topics, and a specific explanation of why the next step is being recommended.
+
+recommended_path: "/learning-paths/smartphones-and-mobile/ams/"
+
+# Link to the next learning path being recommended(For example this could be /learning-paths/servers-and-cloud-computing/mongodb).
+
+
+# further_reading links to references related to this path. Can be:
+ # Manuals for a tool / software mentioned (type: documentation)
+ # Blog about related topics (type: blog)
+ # General online references (type: website)
+
+further_reading:
+ - resource:
+ title: PyTorch
+ link: https://pytorch.org
+ type: documentation
+ - resource:
+ title: MNIST
+ link: https://en.wikipedia.org/wiki/MNIST_database
+ type: website
+ - resource:
+ title: Visual Studio Code
+ link: https://code.visualstudio.com
+ type: website
+
+
+
+# ================================================================================
+# FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 21 # set to always be larger than the content in this path, and one more than 'review'
+title: "Next Steps" # Always the same
+layout: "learningpathall" # All files under learning paths have this same wrapper
+---
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_review.md b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_review.md
new file mode 100644
index 000000000..6fcd04e55
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/_review.md
@@ -0,0 +1,55 @@
+---
+# ================================================================================
+# Edit
+# ================================================================================
+
+# Always 3 questions. Should try to test the reader's knowledge, and reinforce the key points you want them to remember.
+ # question: A one sentence question
+ # answers: The correct answers (from 2-4 answer options only). Should be surrounded by quotes.
+ # correct_answer: An integer indicating what answer is correct (index starts from 0)
+ # explanation: A short (1-3 sentence) explanation of why the correct answer is correct. Can add additional context if desired
+
+
+review:
+ - questions:
+ question: >
+ How is the PyTorch model loaded in the Android app?
+ answers:
+ - Using Module.load(assetManager.open("model.pth")).
+ - By directly passing the model file path to Tensor.load().
+ - Using Module.load(assetFilePath("model.pth")).
+ - By copying the model file to the app’s external storage.
+ correct_answer: 3
+ explanation: >
+ The PyTorch model is loaded in the Android app using the Module.load() method, which takes the absolute file path of the model. The assetFilePath("model.pth") function copies the model from the assets directory to the internal storage and returns its path, which is required by Module.load().
+ - questions:
+ question: >
+ How is the data prepared before running inference on the PyTorch model?
+ answers:
+ - The bitmap image is converted to a tensor with a shape of [1, 3, 224, 224].
+ - The bitmap is resized and normalized to a tensor of shape [1, 1, 28, 28].
+ - The image is converted to grayscale and then reshaped to [1, 28, 28].
+ - The image is flattened into a one-dimensional array.
+ correct_answer: 2
+ explanation: >
+ Before running inference, the bitmap is converted to a float array and then to a tensor with a shape of [1, 1, 28, 28]. The 1 in the first dimension represents the batch size, the second 1 is the number of channels (grayscale), and 28, 28 are the height and width of the image.
+ - questions:
+ question: >
+ Which transformation is applied to the MNIST test images during data preparation in the app?
+ answers:
+ - Rotation and scaling.
+ - Conversion to a tensor with RGB normalization.
+ - Conversion to a tensor with normalization specific to grayscale images.
+ - Random cropping and flipping.
+
+ correct_answer: 3
+ explanation: >
+ During data preparation, the MNIST images are converted to tensors and normalized for grayscale images. This involves scaling the pixel values to a range of [-1, 1], which matches the input expectations of the pre-trained PyTorch model used in the app.
+
+# ================================================================================
+# FIXED, DO NOT MODIFY
+# ================================================================================
+title: "Review" # Always the same title
+weight: 20 # Set to always be larger than the content in this path
+layout: "learningpathall" # All files under learning paths have this same wrapper
+---
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/app.md b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/app.md
new file mode 100644
index 000000000..dcf315e68
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/app.md
@@ -0,0 +1,31 @@
+---
+# User change
+title: "Running an Application"
+
+weight: 5
+
+layout: "learningpathall"
+---
+
+You are now ready to run the application. You can use either an emulator or a physical device. In this guide, we will use an emulator.
+
+To run an app in Android Studio using an emulator, follow these steps:
+1. Configure the Emulator:
+* Go to Tools > Device Manager (or click the Device Manager icon on the toolbar).
+* Click Create Device to set up a new virtual device (if you haven’t done so already).
+* Choose a device model (e.g., Pixel 4) and click Next.
+* Select a system image (e.g., Android 11, API level 30) and click Next.
+* Review the settings and click Finish to create the emulator.
+
+2. Run the App:
+* Make sure the emulator is selected in the device dropdown menu in the toolbar (next to the “Run” button).
+* Click the Run button (a green triangle). Android Studio will build the app, install it on the emulator, and launch it.
+
+3. View the App on the Emulator: Once the app is installed, it will automatically open on the emulator screen, allowing you to interact with it as if it were running on a real device.
+
+Once the application is started, click the Load Image button. It will load a randomly selected image. Then, click Run Inference to recognize the digit. The application will display the predicted label and the inference time as shown below:
+
+![img](Figures/01.png)
+
+![img](Figures/02.png)
+
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/intro.md b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/intro.md
new file mode 100644
index 000000000..fd982a612
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/intro.md
@@ -0,0 +1,24 @@
+---
+# User change
+title: "Background"
+
+weight: 2
+
+layout: "learningpathall"
+---
+
+Running pre-trained machine learning models on mobile and edge devices has become increasingly common as it enables these devices to gain intelligence and perform complex tasks directly on-device. This capability allows smartphones, IoT devices, and embedded systems to execute advanced functions such as image recognition, natural language processing, and real-time decision-making without relying on cloud-based services. By leveraging on-device inference, applications can offer faster responses, reduced latency, enhanced privacy, and offline functionality, making them more efficient and capable of handling sophisticated tasks in various environments.
+
+Arm provides a wide range of hardware and software accelerators designed to optimize the performance of machine learning (ML) models on edge devices. These include specialized processors like Arm's Neural Processing Units (NPUs) and Graphics Processing Units (GPUs), as well as software frameworks like the Arm Compute Library and Arm NN, which are tailored to leverage these hardware capabilities. Arm's technology is ubiquitous, powering a vast array of devices from smartphones and tablets to IoT gadgets and embedded systems. With Arm chips being the core of many Android-based smartphones and other devices, running ML models efficiently on this hardware is crucial for enabling advanced applications such as image recognition, voice assistance, and real-time analytics. By utilizing Arm’s accelerators, developers can achieve lower latency, reduced power consumption, and enhanced performance, making on-device AI both practical and powerful for a wide range of applications.
+
+Running a machine learning model on Android involves a few key steps. First, you need to train and save the model in a mobile-friendly format, such as TensorFlow Lite, ONNX, or TorchScript, depending on the framework you are using. Next, you add the model file to your Android project’s assets directory. In your app’s code, use the corresponding framework’s Android library, such as TensorFlow Lite or PyTorch Mobile, to load the model. You then prepare the input data, ensuring it is formatted and preprocessed in the same way as during model training. The input data is passed through the model, and the output predictions are retrieved and interpreted accordingly. For improved performance, you can leverage hardware acceleration using Android’s Neural Networks API (NNAPI) or use GPU support if available. This process enables the Android app to make real-time predictions and execute complex machine learning tasks directly on the device.
+
+In this Learning Path, you will learn how to perform such inference in an Android app using a pre-trained digit classifier, created in a [previous Learning Path](/learning-paths/cross-platform/pytorch-digit-classification-training).
+
+## Before you begin
+Before you begin, make sure Python3, [Visual Studio Code](https://code.visualstudio.com/download), and [Android Studio](https://developer.android.com/studio/install) are installed on your system.
+
+## Source code
+The complete source code is available [here](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git).
+
+The Python scripts are available [here](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.Python.git).
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/prepare-data.md b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/prepare-data.md
new file mode 100644
index 000000000..99e6c2a74
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/prepare-data.md
@@ -0,0 +1,66 @@
+---
+# User change
+title: "Prepare Test Data"
+
+weight: 4
+
+layout: "learningpathall"
+---
+
+In this section you will add the pre-trained model and prepare the data for the application.
+
+## Model
+To add the model, start by creating the assets folder under app/src/main. Then copy the pre-trained model you created in this [Learning Path](/learning-paths/cross-platform/pytorch-digit-classification-training). The model is also available in [this repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git).
+
+## Data
+To prepare the data, you use the following Python script:
+```Python
+from torchvision import datasets, transforms
+from PIL import Image
+import os
+
+# Constants
+NUM_DIGITS = 10 # Number of unique digits in MNIST (0-9)
+EXAMPLES_PER_DIGIT = 2 # Number of examples per digit
+
+# Define the transformation to convert the image to a tensor
+transform = transforms.Compose([transforms.ToTensor()])
+
+# Load the MNIST test dataset
+test_data = datasets.MNIST(
+ root="data",
+ train=False,
+ download=True,
+ transform=transform
+)
+
+# Create a directory to save the bitmaps
+os.makedirs("mnist_bitmaps", exist_ok=True)
+
+# Dictionary to keep track of collected examples per digit
+collected_examples = {digit: 0 for digit in range(NUM_DIGITS)}
+
+# Loop through the dataset and collect the required number of images
+for i, (image, label) in enumerate(test_data):
+ if collected_examples[label] < EXAMPLES_PER_DIGIT:
+ # Convert tensor to PIL image
+ pil_image = transforms.ToPILImage()(image)
+ # Create the filename with zero-padding
+ filename = f"mnist_bitmaps/{label:02d}_{collected_examples[label]:02d}.png"
+ # Save the image as PNG
+ pil_image.save(filename)
+        print(f"Saved: {filename}")
+
+ # Update the count for the current label
+ collected_examples[label] += 1
+
+ # Break the loop if all required examples are collected
+ if all(count == EXAMPLES_PER_DIGIT for count in collected_examples.values()):
+ break
+```
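The zero-padded filenames are not cosmetic: the Android app later recovers the true label by parsing everything before the first underscore (`randomFile.split("_")[0].toInt()`). A quick Python check of the same convention (illustrative only):

```python
def label_from_filename(name: str) -> int:
    """Recover the digit label from names like '07_00.png' -> 7,
    mirroring the parsing done later in the Android app."""
    return int(name.split("_")[0])

# The same zero-padded format the script above uses when saving bitmaps
filename = f"{7:02d}_{0:02d}.png"
print(filename, "->", label_from_filename(filename))  # 07_00.png -> 7
```

If you rename the bitmaps, keep the `label_example.png` pattern, or the label extraction in the app will fail.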
+
+The above code snippet processes the MNIST test dataset to generate and save bitmap images for digit classification. It defines constants for the number of unique digits (0-9) and the number of examples to collect per digit. The dataset is loaded using torchvision.datasets with a transformation that converts images to tensors. A directory named mnist_bitmaps is created to store the images, and a dictionary tracks the number of collected examples for each digit. The code iterates through the dataset, converts each image tensor back to a PIL image, and saves two examples of each digit using the zero-padded naming scheme label_example.png (for example, 07_00.png). The loop breaks once the required number of examples per digit is saved, ensuring that exactly 20 images (2 per digit) are generated and stored in the directory.
+
+For your convenience, the data is included in [this repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git).
+
+Once you have the model and data, copy them into the assets folder of the Android application.
\ No newline at end of file
diff --git a/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/user-interface.md b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/user-interface.md
new file mode 100644
index 000000000..24bdb5e38
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/pytorch-digit-classification-inference-android-app/user-interface.md
@@ -0,0 +1,307 @@
+---
+# User change
+title: "Android App"
+
+weight: 3
+
+layout: "learningpathall"
+---
+
+In this section you will create an Android app to run the digit classifier. The application will load a randomly selected image containing a handwritten digit, together with its true label. You will then be able to run inference on this image to predict the digit.
+
+Start by creating a project and a user interface:
+1. Open Android Studio and create a new project with an “Empty Views Activity.”
+2. Set the project name to **ArmPyTorchMNISTInference**, set the package name to: **com.arm.armpytorchmnistinference**, select **Kotlin** as the language, and set the minimum SDK to **API 27 ("Oreo" Android 8.1)**.
+
+We set the API to Android 8.1 (API level 27) as it introduced NNAPI, providing a standard interface for running computationally intensive machine learning models on Android devices. Devices with Arm-based SoCs and corresponding hardware accelerators can leverage NNAPI to offload ML tasks to specialized hardware, such as NPUs (Neural Processing Units), DSPs (Digital Signal Processors), or GPUs (Graphics Processing Units).
+
+## User interface
+You will design the user interface to contain the following:
+1. A header.
+2. An ImageView and TextView to display the image and its true label.
+3. A button to load the image.
+4. A button to run inference.
+5. Two TextView controls to display the predicted label and inference time.
+
+To do so, replace the contents of activity_main.xml (located under src/main/res/layout) with the following code:
+
+```XML
+<!-- Layout reconstructed from the description below: the IDs and strings
+     match MainActivity.kt; sizes and margins are representative values. -->
+<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
+    android:layout_width="match_parent"
+    android:layout_height="match_parent"
+    android:orientation="vertical"
+    android:gravity="center_horizontal"
+    android:padding="16dp">
+
+    <TextView
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:layout_marginBottom="16dp"
+        android:text="Digit Recognition"
+        android:textSize="24sp"
+        android:textStyle="bold" />
+
+    <ImageView
+        android:id="@+id/imageView"
+        android:layout_width="200dp"
+        android:layout_height="200dp"
+        android:layout_marginBottom="8dp"
+        android:src="@drawable/sample_image" />
+
+    <TextView
+        android:id="@+id/trueLabel"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:layout_marginBottom="16dp"
+        android:text="True Label: N/A" />
+
+    <Button
+        android:id="@+id/selectImageButton"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:text="Load Image" />
+
+    <Button
+        android:id="@+id/runInferenceButton"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:layout_marginBottom="16dp"
+        android:text="Run Inference" />
+
+    <TextView
+        android:id="@+id/predictedLabel"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:text="Predicted Label: N/A" />
+
+    <TextView
+        android:id="@+id/inferenceTime"
+        android:layout_width="wrap_content"
+        android:layout_height="wrap_content"
+        android:text="Inference Time: N/A" />
+
+</LinearLayout>
+```
+
+The provided XML code defines a user interface layout for an Android activity using a vertical LinearLayout. It includes several UI components arranged vertically with padding and centered alignment. At the top, there is a TextView acting as a header, displaying the text “Digit Recognition” in bold and with a large font size. Below the header, an ImageView is used to display an image, with a default source set to sample_image. This is followed by another TextView that shows the true label of the displayed image, initially set to “True Label: N/A”.
+
+The layout also contains two buttons: one labeled “Load Image” for selecting an input image, and another labeled “Run Inference” to execute the inference process on the selected image. At the bottom, there are two TextView elements to display the predicted label and the inference time, both initially set to “N/A”. The layout uses margins and appropriate sizes for each element to ensure a clean and organized appearance.
+
+## Add PyTorch to the project
+Before going further you will need to add PyTorch to the Android project. To do so, open the build.gradle.kts (Module:app) file and add the following two lines under dependencies:
+
+```Kotlin
+implementation("org.pytorch:pytorch_android:1.10.0")
+implementation("org.pytorch:pytorch_android_torchvision:1.10.0")
+```
+
+The dependencies section should look as follows:
+```Kotlin
+dependencies {
+ implementation(libs.androidx.core.ktx)
+ implementation(libs.androidx.appcompat)
+ implementation(libs.material)
+ implementation(libs.androidx.activity)
+ implementation(libs.androidx.constraintlayout)
+ testImplementation(libs.junit)
+ androidTestImplementation(libs.androidx.junit)
+ androidTestImplementation(libs.androidx.espresso.core)
+
+ implementation("org.pytorch:pytorch_android:1.10.0")
+ implementation("org.pytorch:pytorch_android_torchvision:1.10.0")
+}
+```
+
+## Logic implementation
+You will now implement the logic for the application. This will include loading the pre-trained model, loading and displaying images, and running the inference.
+
+Open MainActivity.kt and modify it as follows:
+```Kotlin
+package com.arm.armpytorchmnistinference
+
+import android.graphics.Bitmap
+import android.graphics.BitmapFactory
+import android.os.Bundle
+import android.widget.Button
+import android.widget.ImageView
+import android.widget.TextView
+import androidx.activity.enableEdgeToEdge
+import androidx.appcompat.app.AppCompatActivity
+import org.pytorch.IValue
+import org.pytorch.Module
+import org.pytorch.Tensor
+import java.io.File
+import java.io.FileOutputStream
+import java.io.IOException
+import java.io.InputStream
+import kotlin.random.Random
+import kotlin.system.measureNanoTime
+
+class MainActivity : AppCompatActivity() {
+    private lateinit var imageView: ImageView
+    private lateinit var trueLabel: TextView
+    private lateinit var selectImageButton: Button
+    private lateinit var runInferenceButton: Button
+    private lateinit var predictedLabel: TextView
+    private lateinit var inferenceTime: TextView
+    private lateinit var model: Module
+    private var currentBitmap: Bitmap? = null
+    private var currentTrueLabel: Int? = null
+
+    override fun onCreate(savedInstanceState: Bundle?) {
+        super.onCreate(savedInstanceState)
+        enableEdgeToEdge()
+        setContentView(R.layout.activity_main)
+
+        // Initialize UI elements
+        imageView = findViewById(R.id.imageView)
+        trueLabel = findViewById(R.id.trueLabel)
+        selectImageButton = findViewById(R.id.selectImageButton)
+        runInferenceButton = findViewById(R.id.runInferenceButton)
+        predictedLabel = findViewById(R.id.predictedLabel)
+        inferenceTime = findViewById(R.id.inferenceTime)
+
+        // Load the model from assets
+        model = Module.load(assetFilePath("model.pth"))
+
+        // Set up button click listener for selecting a random image
+        selectImageButton.setOnClickListener {
+            selectRandomImageFromAssets()
+        }
+
+        // Set up button click listener for running inference
+        runInferenceButton.setOnClickListener {
+            currentBitmap?.let { bitmap ->
+                runInference(bitmap)
+            }
+        }
+    }
+
+    private fun selectRandomImageFromAssets() {
+        try {
+            // Get the list of files in the mnist_bitmaps folder
+            val assetManager = assets
+            val files = assetManager.list("mnist_bitmaps") ?: arrayOf()
+
+            if (files.isEmpty()) {
+                trueLabel.text = "No images found in assets/mnist_bitmaps"
+                return
+            }
+
+            // Select a random file from the list and decode it; use {} closes the stream
+            val randomFile = files[Random.nextInt(files.size)]
+            val bitmap = assetManager.open("mnist_bitmaps/$randomFile").use { inputStream: InputStream ->
+                BitmapFactory.decodeStream(inputStream)
+            }
+
+            // Extract the true label from the filename (e.g., 07_00.png -> true label is 7)
+            currentTrueLabel = randomFile.split("_")[0].toInt()
+
+            // Display the image and its true label
+            imageView.setImageBitmap(bitmap)
+            trueLabel.text = "True Label: $currentTrueLabel"
+
+            // Set the current bitmap for inference
+            currentBitmap = bitmap
+        } catch (e: IOException) {
+            e.printStackTrace()
+            trueLabel.text = "Error loading image from assets"
+        }
+    }
+
+    // Convert a grayscale bitmap to a float array and create a tensor with shape [1, 1, 28, 28]
+    private fun createTensorFromBitmap(bitmap: Bitmap): Tensor {
+        // Ensure the bitmap has the expected dimensions [28, 28]
+        if (bitmap.width != 28 || bitmap.height != 28) {
+            throw IllegalArgumentException("Expected bitmap of size [28, 28], but got [${bitmap.width}, ${bitmap.height}]")
+        }
+
+        // Convert the grayscale bitmap to a float array
+        val width = bitmap.width
+        val height = bitmap.height
+        val floatArray = FloatArray(width * height)
+        val pixels = IntArray(width * height)
+        bitmap.getPixels(pixels, 0, width, 0, 0, width, height)
+
+        for (i in pixels.indices) {
+            // Normalize pixel values to the [0, 1] range; for a grayscale image the
+            // color channels are equal, so the lowest byte of the ARGB int is sufficient
+            floatArray[i] = (pixels[i] and 0xFF) / 255.0f
+        }
+
+        // Create a tensor with shape [1, 1, 28, 28] (batch size, channels, height, width)
+        return Tensor.fromBlob(floatArray, longArrayOf(1, 1, height.toLong(), width.toLong()))
+    }
+
+    private fun runInference(bitmap: Bitmap) {
+        // Convert the bitmap to a tensor with shape [1, 1, 28, 28]
+        val inputTensor = createTensorFromBitmap(bitmap)
+
+        // Run inference and measure the time taken
+        val inferenceTimeMicros = measureTimeMicros {
+            // Forward pass through the model
+            val outputTensor = model.forward(IValue.from(inputTensor)).toTensor()
+            val scores = outputTensor.dataAsFloatArray
+
+            // Get the index of the class with the highest score
+            val maxIndex = scores.indices.maxByOrNull { scores[it] } ?: -1
+            predictedLabel.text = "Predicted Label: $maxIndex"
+        }
+
+        // Update the inference time TextView (in microseconds)
+        inferenceTime.text = "Inference Time: $inferenceTimeMicros µs"
+    }
+
+    // Measure the execution time of a block in microseconds
+    private inline fun measureTimeMicros(block: () -> Unit): Long {
+        val time = measureNanoTime(block)
+        return time / 1000 // Convert nanoseconds to microseconds
+    }
+
+    // Copy a file from assets to internal storage and return its absolute path
+    private fun assetFilePath(assetName: String): String {
+        val file = File(filesDir, assetName)
+        assets.open(assetName).use { inputStream ->
+            FileOutputStream(file).use { outputStream ->
+                val buffer = ByteArray(4 * 1024)
+                var read: Int
+                while (inputStream.read(buffer).also { read = it } != -1) {
+                    outputStream.write(buffer, 0, read)
+                }
+                outputStream.flush()
+            }
+        }
+        return file.absolutePath
+    }
+}
+```
+
+The Kotlin code above defines an Android activity (MainActivity) that performs inference using a pre-trained PyTorch model trained on the MNIST dataset. The app lets the user load a random MNIST image from the assets and run the model to classify it.
+
+The MainActivity class contains several methods. The first, onCreate(Bundle?), is called when the activity is created. It sets up the user interface by inflating the layout defined in activity_main.xml and initializes the UI components: an ImageView to display the image, TextView controls to show the true and predicted labels, and the two buttons (selectImageButton and runInferenceButton). The method then loads the PyTorch model from the assets folder using the assetFilePath function and attaches click listeners to the buttons: selectImageButton picks a random image from the mnist_bitmaps folder, while runInferenceButton runs inference on the selected image.
+
+Next, the selectRandomImageFromAssets() method is responsible for selecting a random image from the mnist_bitmaps folder in the assets. It lists all the files in the folder, picks one at random, and loads it as a Bitmap. The method then extracts the true label from the filename (e.g., 07_00.png implies a true label of 7), displays the selected image in the ImageView, and updates the trueLabel TextView with the correct label. If there is an error loading the image or the folder is empty, an appropriate error message is displayed in the trueLabel TextView.
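As a quick illustration of the filename convention, this standalone Python sketch (a hypothetical helper, not part of the app) mirrors the Kotlin `split("_")[0].toInt()` logic:

```python
def true_label_from_filename(filename: str) -> int:
    """Extract the MNIST true label from names like '07_00.png' -> 7."""
    # Everything before the first underscore is the digit label
    return int(filename.split("_")[0])

print(true_label_from_filename("07_00.png"))  # -> 7
print(true_label_from_filename("3_12.png"))   # -> 3
```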
+
+Afterward, the createTensorFromBitmap(Bitmap) converts a grayscale bitmap of size 28x28 (an image from the MNIST dataset) into a PyTorch Tensor. First, the method verifies that the bitmap has the correct dimensions. Then, it extracts pixel data from the bitmap, normalizes each pixel value to a float in the range [0, 1], and stores the values in a float array. The method finally constructs and returns a tensor with the shape [1, 1, 28, 28], where 1 is the batch size, 1 is the number of channels (for grayscale), and 28 represents the width and height of the image. This is required to match the input expected by the model.
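To make the normalization step concrete, here is a small language-neutral sketch in Python (illustration only; the function name is hypothetical). It reproduces the per-pixel arithmetic: the low byte of each packed ARGB pixel is divided by 255 so every value lands in [0, 1]:

```python
def normalize_pixels(pixels):
    """Mirror the Kotlin loop: keep the low 8 bits of each packed
    ARGB int (equal to R for a grayscale image) and scale to [0, 1]."""
    return [(p & 0xFF) / 255.0 for p in pixels]

# A white pixel (0xFFFFFFFF) maps to 1.0, a black pixel (0xFF000000) to 0.0
print(normalize_pixels([0xFFFFFFFF, 0xFF000000]))  # -> [1.0, 0.0]

# A 28x28 image yields 784 floats, matching the [1, 1, 28, 28] tensor
floats = normalize_pixels([0xFF000000] * (28 * 28))
print(len(floats))  # -> 784
```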
+
+The runInference method accepts a Bitmap and performs inference with the pre-trained PyTorch model. It first converts the bitmap to a tensor using createTensorFromBitmap, then measures the time taken by the forward pass using measureTimeMicros. The output tensor, which contains a score for each digit class, is reduced to the predicted label (the index of the highest score), which is shown in the predictedLabel TextView. The method also updates the inferenceTime TextView with the inference time in microseconds.
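The post-processing step is simply an argmax over the ten class scores. A minimal Python sketch of the same selection (the Kotlin code uses `indices.maxByOrNull`; the function name here is hypothetical):

```python
def predict_label(scores):
    """Return the index of the highest score, or -1 if there are no scores."""
    if not scores:
        return -1
    # Index of the maximum element, like Kotlin's indices.maxByOrNull
    return max(range(len(scores)), key=lambda i: scores[i])

# Ten raw scores, one per digit class; index 2 holds the largest value here
print(predict_label([0.1, 0.2, 3.5, 0.0, 0.4, 0.1, 0.2, 0.3, 0.9, 1.2]))  # -> 2
```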
+
+The inline function measureTimeMicros is a small utility that measures the execution time of the supplied block. It uses measureNanoTime to time the block in nanoseconds and divides the result by 1000 to convert to microseconds. runInference uses it to time the model's forward pass.
+
+The assetFilePath method is a helper function that copies a file from the assets folder to the app’s internal storage and returns the absolute path of the copied file. This is necessary because PyTorch’s Module.load() method requires a file path, not an InputStream. The function reads the specified asset file, writes its contents to a file in the internal storage, and returns the path to this file. This method is used in onCreate to load the PyTorch model file (model.pth) from the assets.
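The copy loop in assetFilePath is a standard buffered stream copy: read up to 4 KB at a time until the source is exhausted. A Python sketch of the same pattern (the helper name is hypothetical; in-memory streams stand in for the asset and file streams):

```python
import io

def copy_stream(src, dst, buffer_size=4 * 1024):
    """Copy src to dst in buffer_size chunks; return the number of bytes copied."""
    total = 0
    while True:
        chunk = src.read(buffer_size)
        if not chunk:  # empty read signals end of stream (Kotlin: read == -1)
            break
        dst.write(chunk)
        total += len(chunk)
    return total

src = io.BytesIO(b"x" * 10_000)  # stands in for the asset InputStream
dst = io.BytesIO()               # stands in for the FileOutputStream
print(copy_stream(src, dst))     # -> 10000
```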
+
+In summary, the MainActivity class initializes the UI components, loads the pre-trained PyTorch model, and lets the user select random MNIST images and run inference on them. Each method handles one specific task, such as loading images, converting them to tensors, running inference, or measuring execution time, which keeps the code modular and easy to maintain.
+
+To run the application successfully, you still need to add the model file and prepare the bitmap images.
\ No newline at end of file