
Codeless ML with TensorFlow and AI Platform

Advances in AI frameworks enable developers to create and deploy deep learning models with as little effort as clicking a few buttons on the screen. Using a UI or an API built on TensorFlow Estimators, models can be built and served without writing a single line of machine learning code.

Photo by Adi Goldstein on Unsplash

Seventy years ago, only a handful of experts knew how to create computer programs, because programming required deep theoretical and technical specialization. Over the years, we have built ever higher levels of abstraction and encapsulation around programming, allowing less specialized personnel to create software with very basic tools (see Wix, for example). The very same process is happening today with machine learning, only it is advancing much faster. In this blog post we will write a simple script that generates a full machine learning pipeline.

Truly codeless?

This post contains two types of code. The first is a SQL query used to generate the dataset; this part of the code could be replaced by tools like Google Cloud Dataprep. The other type consists of API calls made with a Python client library; all of these actions are also available through the AI Platform UI. When I say codeless, I mean that at no point will you need to import TensorFlow or any other ML library.

In this demo, I will use the Chicago Taxi Trips open dataset in Google BigQuery to predict the travel time of a taxi based on the pickup location, the desired drop-off, and the ride start time. The model will be trained and deployed using Google Cloud services that wrap TensorFlow.

The entire code sample can be found in this GitHub repository.

Extract Features using BigQuery

Based on an EDA shown in this notebook, I created a SQL query to generate a training dataset:
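A minimal sketch of such a query is shown below. The column names come from the public `bigquery-public-data.chicago_taxi_trips.taxi_trips` schema; the exact feature engineering and filtering are simplified assumptions (the full query is in the repo):

```python
# Illustrative training query -- the target (trip_seconds) comes first,
# as AI Platform requires. Feature choices are a simplified assumption.
QUERY = """
SELECT
  trip_seconds,                                         -- target: travel time
  pickup_latitude,
  pickup_longitude,
  dropoff_latitude,
  dropoff_longitude,
  EXTRACT(HOUR FROM trip_start_timestamp) AS hour_of_day,
  EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS day_of_week
FROM
  `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
  trip_seconds IS NOT NULL
  AND pickup_latitude IS NOT NULL
  AND dropoff_latitude IS NOT NULL
"""
```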

In the repo, you can see how I execute the query using the Python client and export the results to GCS. A sketch of that step, reusing the QUERY string from above, looks like this (project, dataset, and bucket names are placeholders):
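```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Materialize the query results into a staging table.
table_ref = client.dataset("taxi_dataset").table("taxi_trips_train")
job_config = bigquery.QueryJobConfig(destination=table_ref)
client.query(QUERY, job_config=job_config).result()

# Export to GCS as headerless CSV, as AI Platform expects.
extract_config = bigquery.ExtractJobConfig(print_header=False)
client.extract_table(
    table_ref,
    "gs://my-bucket/taxi/train-*.csv",
    job_config=extract_config,
).result()
```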

Important! In order for AI Platform to build a model with this data, the first column must be the target variable, and the CSV export should not contain a header.

Submit a hyper-parameter tuning job and deploy

After I have my dataset containing a few hundred thousand rides, I define a simple neural network architecture based on the TensorFlow Estimator API, along with a parameter space to search. This specific spec will create a neural network with three hidden layers that solves a regression task (predicting the expected trip time). It will launch 50 trials to search for optimal settings of the learning rate, regularization factors, and maximum number of steps.
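The spec itself is just a plain job description, roughly like the sketch below. The built-in algorithm image, flag names, and parameter ranges here are illustrative assumptions, not the exact values from the repo:

```python
# Sketch of an AI Platform training job spec. The built-in algorithm image,
# its flags, and the parameter ranges are assumptions; bucket and job names
# are placeholders.
training_inputs = {
    "scaleTier": "STANDARD_1",
    "masterConfig": {
        # Built-in, Estimator-based wide & deep algorithm image.
        "imageUri": "gcr.io/cloud-ml-algos/wide_deep_learner_cpu:latest",
    },
    "args": [
        "--preprocess",
        "--model_type=regression",
        "--hidden_units=128,64,32",  # three hidden layers
        "--training_data_path=gs://my-bucket/taxi/train-*.csv",
    ],
    "region": "us-central1",
    "jobDir": "gs://my-bucket/taxi/models",
    "hyperparameters": {
        "goal": "MINIMIZE",
        "hyperparameterMetricTag": "loss",  # metric reported by the trainer
        "maxTrials": 50,
        "maxParallelTrials": 5,
        "params": [
            {"parameterName": "learning_rate", "type": "DOUBLE",
             "minValue": 0.0001, "maxValue": 0.1,
             "scaleType": "UNIT_LOG_SCALE"},
            {"parameterName": "l1_regularization_strength", "type": "DOUBLE",
             "minValue": 0.0001, "maxValue": 0.1,
             "scaleType": "UNIT_LOG_SCALE"},
            {"parameterName": "l2_regularization_strength", "type": "DOUBLE",
             "minValue": 0.0001, "maxValue": 0.1,
             "scaleType": "UNIT_LOG_SCALE"},
            {"parameterName": "max_steps", "type": "INTEGER",
             "minValue": 1000, "maxValue": 10000,
             "scaleType": "UNIT_LINEAR_SCALE"},
        ],
    },
}

training_job_spec = {
    "jobId": "chicago_taxi_trip_time_tuning",
    "trainingInput": training_inputs,
}
```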

Given the spec above, I can use a Python client to launch the training job:
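With the Google API Python client, submitting the job is a single call (PROJECT_ID is a placeholder):

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder

ml = discovery.build("ml", "v1")
response = (
    ml.projects()
    .jobs()
    .create(parent=f"projects/{PROJECT_ID}", body=training_job_spec)
    .execute()
)
print(response)
```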

I use the API client to monitor the job run and, when it is done, I deploy and test the model. A rough sketch of that loop and the two deployment calls follows (the model name, version settings, and exported model path are assumptions):
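```python
import time

JOB_NAME = f"projects/{PROJECT_ID}/jobs/chicago_taxi_trip_time_tuning"

# Poll until the tuning job reaches a terminal state.
while True:
    job = ml.projects().jobs().get(name=JOB_NAME).execute()
    if job["state"] in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(60)

# Create a model resource, then deploy the best trial's SavedModel
# as a version (the export path layout is an assumption).
ml.projects().models().create(
    parent=f"projects/{PROJECT_ID}",
    body={"name": "chicago_taxi_trip_time", "regions": ["us-central1"]},
).execute()

ml.projects().models().versions().create(
    parent=f"projects/{PROJECT_ID}/models/chicago_taxi_trip_time",
    body={
        "name": "v1",
        "deploymentUri": "gs://my-bucket/taxi/models/1/model",  # best trial's export
        "runtimeVersion": "1.14",
        "pythonVersion": "3.5",
        "framework": "TENSORFLOW",
    },
).execute()
```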

With this, I completed the deployment of a machine learning pipeline using only API calls.

Get predictions

In order to get predictions, I load some of the test set records into memory and send them to the deployed version for inference:
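Something along these lines, assuming the deployed model accepts feature lists in training-column order (the exact instance format depends on the exported model's serving signature):

```python
# Two illustrative instances in training-column order (minus the target):
# pickup lat/long, drop-off lat/long, hour of day, day of week.
instances = [
    [41.88, -87.63, 41.98, -87.90, 8, 2],
    [41.90, -87.62, 41.79, -87.58, 17, 6],
]

response = ml.projects().predict(
    name=f"projects/{PROJECT_ID}/models/chicago_taxi_trip_time",
    body={"instances": instances},
).execute()

print(response["predictions"])
```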

Want more stories? Check our blog, or follow Gad on Twitter.

Thanks to Adam Horowitz for the technical advice.
