~ Dr. Robert Ford.
Computer Vision, often abbreviated as CV, is defined as a field of study that seeks to develop techniques to help computers “see” and understand the content of digital images such as photographs and videos.[1]
Create and train a computer vision model to recognize different items of clothing using TensorFlow on Google Cloud Compute Engine.
The goal is for the model to figure out the relationship between the training data and its labels. Once training is complete, you want your model to see fresh images of clothing that resemble your training data and make predictions about what class of clothing they belong to.
This involved training a neural network to classify images of clothing from a dataset called Fashion MNIST, which contains 70,000 items of clothing belonging to 10 different categories.
60,000 images were used to train the network and 10,000 images were used to evaluate how accurately the network learned to classify images.
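As a rough sketch of that setup (using the standard tf.keras Fashion MNIST loader; the variable names and the 0-1 pixel scaling are illustrative, not necessarily the project's exact code):

import tensorflow as tf

# Fashion MNIST ships with tf.keras: 60,000 training and 10,000 test images.
(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()

# Scale pixel values from the 0-255 range down to 0-1 before training.
train_images = train_images / 255.0
test_images = test_images / 255.0

With the data loaded, the network itself can be defined: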
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
With the model defined, the next step is to compile it with an optimizer, a loss function, and a metric so it can learn the relationship between the training data and its labels:
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
Loss is a number that indicates how well the model is performing: the better the model performs, the smaller the loss; the worse it performs, the larger the loss.
Notice the metrics= parameter. This allows TensorFlow to report the accuracy of the training after each epoch by checking the predicted results against the known answers (labels). In other words, it reports how effectively the training is progressing.
The optimizer is one of the two arguments required for compiling a tf.keras model. An optimizer is an algorithm that adjusts the attributes of the neural network, such as its weights and learning rate, in order to reduce the loss and improve accuracy.
With the model compiled using these parameters, it was then trained on the 60,000 training images and their labels.
After this initial training run, the neural network was about 89% accurate in classifying the training data: it had found a mapping between the images and their labels that worked 89% of the time.
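A minimal sketch of that training and evaluation step, reusing the data and model variables above (5 epochs is an arbitrary choice here, not necessarily what the project used):

# fit() learns the mapping between the training images and their labels.
model.fit(train_images, train_labels, epochs=5)

# evaluate() checks accuracy on the 10,000 held-out test images.
test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print("Test accuracy:", test_accuracy)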
Derive insights from your images in the cloud or at the edge with AutoML Vision or use pre-trained Vision API models to detect emotion, understand text, and more.
How to solve for unwanted or inappropriate image display on a web or mobile platform using an image-moderation AI API called Sightengine.
Establish an automated way to prevent the upload of unwanted / adult images & video to a social network application.
Sightengine is an artificial intelligence company that uses proprietary, state-of-the-art deep learning systems to provide powerful image and video analysis through simple, clean APIs, covering a range of image-recognition services.
Sightengine has brilliantly reduced the complexity of an AI detection system down to two lines of code: the first initializes the engine, and the second checks the image based on the parameters passed. In our case, we'll focus on the nudity-detection model.
The Nudity Detection Model determines if an image contains some level of nudity along with a description of the “level” of nudity.
These levels are based on the following criteria:
The probability score for each of these criteria is between 0 and 1, with 1 being the highest probability and 0 the lowest.
$sightEngine = new SightengineClient(env('SIGHTENGINEUSER'), env('SIGHTENGINEKEY'));
$imageCheck = $sightEngine->check(['nudity'])->set_file($localFilePath);
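For readers not working in PHP, here is a rough equivalent that calls Sightengine's REST endpoint directly and gates the upload on the returned score. It is a sketch only: the helper name, the 0.5 threshold, and the exact response fields ("nudity" / "safe") are assumptions made to illustrate the logic.

import requests

def is_image_safe(local_file_path, api_user, api_secret, threshold=0.5):
    # Ask Sightengine to run its nudity model on the local image file.
    with open(local_file_path, "rb") as media:
        response = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": media},
            data={"models": "nudity", "api_user": api_user, "api_secret": api_secret},
        )
    result = response.json()
    # Assumed response shape: a "nudity" object with a "safe" probability (0-1).
    return result.get("nudity", {}).get("safe", 0) >= threshold

An application would call is_image_safe() before accepting an upload and reject or flag the image when it returns False.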
AutoML enables developers with limited machine learning expertise to train high-quality models specific to their business needs. Build your own custom machine learning model in minutes.
Use an image dataset to train an AutoML model that classifies flowers into their assigned labels.
The image files used are from the flower dataset. These input images are stored in a public GCS bucket with a CSV file for data import. This file has two columns: the first column lists an image's URI in GCS, and the second column contains the image's label.
On the AutoML console, select "Create dataset" and set the import file path to the CSV file:
gs://cloud-samples-data/ai-platform/flowers/flowers.csv.
The AutoML training method allows the user to train the model with minimal effort and ML expertise. The node budget was set to 8 hours, and a notification was sent when training finished. Training completed in about 30 minutes with an accuracy score of 0.98.
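The same dataset creation and training can also be scripted with the Vertex AI Python SDK. The sketch below is only an approximation of the console workflow: the project ID, region, display names, and the 8,000 milli-node-hour budget (8 node hours) are assumptions.

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Create the image dataset from the public flowers CSV import file.
dataset = aiplatform.ImageDataset.create(
    display_name="flowers",
    gcs_source="gs://cloud-samples-data/ai-platform/flowers/flowers.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

# Train a single-label AutoML image classification model with an 8-hour budget.
job = aiplatform.AutoMLImageTrainingJob(
    display_name="flowers_automl",
    prediction_type="classification",
    multi_label=False,
)
model = job.run(dataset=dataset, budget_milli_node_hours=8000)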
After the AutoML image classification model completed training, the next step was to create an endpoint and deploy the model to the endpoint.
Endpoints can be created from the evaluation tab on the training page. This one was named "automl_image". The model settings accepted the traffic split of 100%, and one node was deployed to serve endpoint predictions.
After the model was deployed to this new endpoint, the next step was to send an image to the model for label prediction.
After the endpoint creation process finished, we sent a single image annotation (prediction) request in the console. This was done by using the “Test your model” section and uploading a picture for prediction.
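Outside the console, a similar single prediction can be sent from Python. Continuing from the training sketch above, and assuming the usual AutoML image instance format of a base64-encoded "content" field plus a hypothetical local file name:

import base64

# Deploy the trained model to an endpoint so it can serve online predictions.
endpoint = model.deploy()

# Read and base64-encode a local test image, then request a prediction.
with open("my_flower.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

prediction = endpoint.predict(instances=[{"content": encoded_image}])
print(prediction.predictions)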
This project demonstrates how to create a model for classifying content using Vertex AI. The project trains an AutoML model by using a corpus of crowd-sourced "happy moments" from the Kaggle open-source dataset HappyDB. The resulting model classifies happy moments into categories reflecting the causes of happiness.
The text files used are from the HappyDB dataset, with 24,528 rows. Copy the data from the public cloud-ml-data GCS bucket into your own bucket:
gsutil -m cp -R gs://cloud-ml-data/NL-classification/happiness.csv gs://${BUCKET}/text/
This file has two columns: the first column contains the happy-moment text, and the second column contains the text label.
On the AutoML console, select "Create dataset" and name it "auto_ml_text_classification". Under the "Text" tab, set the data type and objective by selecting the "Text classification (single-label)" option, and set the import file path to the CSV file: gs://paulkamau-lcm/text/happiness.csv
The AutoML training method allows the user to train the model with minimal effort and ML expertise. The node budget was set to 8 hours; training took several hours, and a notification was sent when it finished.
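As with the image model, this step can be scripted with the Vertex AI Python SDK. In the sketch below the project ID, region, and model display name are assumptions; the import file is the bucket path used above.

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Create the text dataset from the HappyDB CSV import file.
dataset = aiplatform.TextDataset.create(
    display_name="auto_ml_text_classification",
    gcs_source="gs://paulkamau-lcm/text/happiness.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,
)

# Train a single-label AutoML text classification model.
job = aiplatform.AutoMLTextTrainingJob(
    display_name="happydb_automl",
    prediction_type="classification",
    multi_label=False,
)
model = job.run(dataset=dataset)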
After the AutoML text classification model completed training, the next step was to create an endpoint and deploy the model to the endpoint.
Endpoints can be created from the evaluation tab on the training page. This was named “automl_text”.
After the endpoint creation process finished, we sent a single text annotation (prediction) request in the console. This was done by using the "Test your model" section and entering a snippet of text for prediction.
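A sketch of the same prediction from Python, continuing from the training sketch above and assuming text classification instances are passed as a "content" field (the sample sentence is made up):

# Deploy the text model and send a single happy moment for classification.
endpoint = model.deploy()

prediction = endpoint.predict(
    instances=[{"content": "I went for a long hike with my dog this morning."}]
)
print(prediction.predictions)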
This project demonstrates how to create a model for classifying content using Vertex AI. The project trains an AutoML model by using a set of videos on GCS.
The videos used are a five-class subset of the HMDB dataset. Copy the data from the public demo GCS bucket into your own bucket:
gsutil -m cp -R gs://automl-video-demo-data/hmdb_split1_5classes_all.csv gs://auto-ml-tutorials/video/
This file has two columns: the first column lists the video's source URI in GCS, and the second column contains the video label.
On the AutoML console, select "Create dataset" and name it "auto_ml_video_classification". Under the "Video" tab, set the data type and objective by selecting the "Video classification" option, and set the import file path to the CSV file: gs://auto-ml-tutorials/video/hmdb_split1_5classes_all.csv
After the AutoML video classification model completed training, the next step was to create an endpoint and deploy the model to it. The model's average precision and recall were both 100%, and the confusion matrix showed 100% agreement between true and predicted labels.
Endpoints can be created from the evaluation tab on the training page. This was named “automl_video”.
Model predictions were done in batch format. The batch was named "demo_data_predictions", and the source path and destination were set. The prediction results are stored in the GCS bucket.
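A sketch of that batch prediction with the Vertex AI Python SDK; the model resource name, the source file, and the destination prefix below are placeholders, not the project's exact values.

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Load the trained AutoML video model by its resource name.
model = aiplatform.Model("YOUR_VIDEO_MODEL_RESOURCE_NAME")

# Run a batch prediction job; results are written to the destination bucket.
batch_job = model.batch_predict(
    job_display_name="demo_data_predictions",
    gcs_source="gs://auto-ml-tutorials/video/batch_request.jsonl",
    gcs_destination_prefix="gs://auto-ml-tutorials/video/predictions/",
    sync=True,
)
print(batch_job.state)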
In the results for the video annotation, Vertex AI provides three types of information:
Build a binary classification model from tabular data using Vertex AI.
The goal of the trained model is to predict whether a bank client will buy a term deposit (a type of investment) using features like age, income, and profession. This type of model can help banks determine who to focus its marketing resources on.
This tutorial uses the Bank marketing open-source dataset.
On the AutoML console, select "Create dataset" with a tabular data source and set the import file path to the Bank marketing CSV file.
The AutoML training method allows the user to train the model with minimal effort and ML expertise. The node budget was set to 8 hours; training took several hours, and a notification was sent when it finished.
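The equivalent training step with the Vertex AI Python SDK would look roughly like this; the project ID, bucket path, display names, and the "Deposit" target column are assumptions standing in for the actual Bank marketing setup.

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Create the tabular dataset from the Bank marketing CSV.
dataset = aiplatform.TabularDataset.create(
    display_name="bank_marketing",
    gcs_source="gs://your-bucket/bank-marketing.csv",
)

# Train a binary classification model with an 8-hour node budget.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="bank_marketing_automl",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="Deposit",  # assumed name of the label column
    budget_milli_node_hours=8000,
)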
After the AutoML tabular classification model completed training, the next step was to create an endpoint and deploy the model to the endpoint.
Endpoints can be created from the evaluation tab on the training page. The model settings accepted the traffic split of 100%, and one node was deployed to serve endpoint predictions.
After the model was deployed to this new endpoint, the next step was to send a set of client feature values to the model for prediction.
After the endpoint creation process finished, we sent a single prediction request in the console. This was done by using the "Test your model" section and entering values for the client's features.
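A sketch of that online prediction from Python, continuing from the tabular training sketch above. The feature names and values are illustrative only; a real request must supply every column the model was trained on.

# Tabular models need dedicated resources, so a machine type is specified.
endpoint = model.deploy(machine_type="n1-standard-4")

# Predict for one client; the keys below are hypothetical feature names.
prediction = endpoint.predict(instances=[{
    "Age": "39",
    "Job": "blue-collar",
    "Balance": "450",
    "Duration": "120",
}])
print(prediction.predictions)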
Explainable AI is a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models, natively integrated with a number of Google's products and services
BigQuery ML lets you create and execute machine learning models in BigQuery using standard SQL queries. BigQuery ML democratizes machine learning by letting SQL practitioners build models using existing SQL tools and skills. BigQuery ML increases development speed by eliminating the need to move data.
Create a machine learning model inside BigQuery to predict the fare of a cab ride given the model inputs, then evaluate the model's performance and make predictions with it.
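A rough sketch of what that looks like from the BigQuery Python client; the dataset, model, and source table names (and the feature columns) are placeholders, not the lab's exact ones.

from google.cloud import bigquery

client = bigquery.Client()

# Create a linear regression model that predicts fare_amount from trip features.
create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.taxi_fare_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['fare_amount']) AS
SELECT
  fare_amount,
  passenger_count,
  trip_distance
FROM
  `my_project.my_dataset.taxi_trips`
WHERE
  fare_amount > 0
"""
client.query(create_model_sql).result()

# Evaluate the trained model; ML.PREDICT can then be called the same way.
evaluate_sql = "SELECT * FROM ML.EVALUATE(MODEL `my_dataset.taxi_fare_model`)"
for row in client.query(evaluate_sql).result():
    print(dict(row))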
Explore the financial transactions data for fraud analysis, apply feature engineering and machine learning techniques to detect fraudulent activities using BigQuery ML.
Use BigQuery to load the data from the Cloud Storage bucket, write and execute queries in BigQuery, analyze soccer event data. Then use BigQuery ML to train an expected goals model on the soccer event data and evaluate the impressiveness of World Cup goals.
Artificial Intelligence as a service, or AIaaS, is the on-demand delivery and use of AI and Deep learning capabilities towards an individual or business objective.
An introduction to Sensory ML with Google’s Pre-built AI
Getting started with AI made for human vision, speech, text, phonics, auditory & neural pathways.