Machine Learning @ WWDC 2020

Recaps, code samples, and resources for new Machine Learning announcements at WWDC 2020!

Ethan Saadia
Jun 12, 2020 · 2 min read

WWDC 2020 is here, and Apple has upgraded its machine learning tools and platforms. I will be updating this article throughout the conference week with summaries of the announcements, code samples, and tutorials for the latest technologies!

New in Core ML

  1. Deploy models to apps with CloudKit
  2. Model encryption

New in Create ML

  1. Train video models
  2. Train style transfer models on both image and video

Core ML Tools

Core ML Tools is a Python library for creating machine learning models in the Core ML format, and it received a huge update this week. The tools make it possible to convert models from TensorFlow (both 1.x and 2.x) and PyTorch to Core ML in just a few lines of code with a single Python package.
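For instance, PyTorch conversion goes through TorchScript: you trace the model first and pass the traced module to the same converter. Here is a rough sketch assuming torchvision's MobileNetV2 (the model and input shape are only illustrative):

import torch
import torchvision
import coremltools as ct

# Any torchvision classifier works the same way; MobileNetV2 is illustrative.
torch_model = torchvision.models.mobilenet_v2(pretrained=True).eval()

# Core ML Tools converts PyTorch models via TorchScript, so trace the model first.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=(1, 3, 224, 224))]
)
mlmodel.save("mobilenet_v2.mlmodel")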

Here is how to convert a Keras model in TensorFlow 2.x to Core ML. The process is extremely simple and only takes a few lines of Python.
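As a minimal sketch (MobileNetV2 stands in for any tf.keras model here), the basic conversion is just:

import tensorflow as tf
import coremltools as ct

# Load (or build) any tf.keras model; MobileNetV2 is only an example.
keras_model = tf.keras.applications.MobileNetV2()

# The unified converter detects the TensorFlow source automatically.
mlmodel = ct.convert(keras_model)
mlmodel.save("mobilenetv2.mlmodel")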

Image Inputs

If your model takes an image as an input, make sure to specify a ct.ImageType input so you can use the Vision framework in your app, which removes the boilerplate of working with image inputs and outputs.

import tensorflow as tf
import coremltools as ct

model = tf.keras.applications.ResNet50()

mlmodel = ct.convert(
    model,
    inputs=[ct.ImageType(bias=[-1, -1, -1], scale=1/127)]
)
mlmodel.save("resnet50.mlmodel")

Model Quantization

Since your models will likely be running on a mobile device, model size, latency, and power consumption are all important factors for performance. Model quantization is a technique that reduces the size of a model's weights with little to no loss in accuracy. Core ML Tools uses the float32 type by default, but you can specify a smaller number of bits to use instead. For example, the following code cuts our model's size in half by using float16, which also happens to be a new type in Swift 5.3!

from coremltools.models.neural_network import quantization_utils

small_mlmodel = quantization_utils.quantize_weights(
    mlmodel,
    nbits=16
)
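You can go further than 16 bits if you want an even smaller model. Here is a sketch, reusing the mlmodel from above, of 8-bit linear quantization (an assumption for illustration; re-check accuracy after quantizing this aggressively):

# 8-bit linear quantization: roughly a quarter of the original float32 size.
# Always re-evaluate accuracy afterwards; aggressive quantization can hurt quality.
tiny_mlmodel = quantization_utils.quantize_weights(
    mlmodel,
    nbits=8,
    quantization_mode="linear"
)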

That’s all the new ML items I’ve added so far, but keep checking back for more. Thank you!
