Face Analytics

Face Analytics is a Catalyst Zia AI-driven service that performs facial detection in images, and implements advanced computational analysis on the detected faces to identify and predict the following attributes:

  • Coordinates of the face and the facial features
  • Smile detection
  • Age detection
  • Gender detection

Zia maps a series of interest points in a detected face and performs an in-depth analysis on their relative positions to generate the results for the attributes. It implements several AI algorithms for this purpose, and compares the detected localized landmarks with samples from machine learning datasets to arrive at predictions of the smile, age, and gender detections.

Face Analytics also provides the confidence level of each attribute prediction, enabling you to make informed decisions. Face Analytics can detect up to 10 faces in an image, and it provides predictions of the attributes for each detected face.

Catalyst provides Face Analytics in the Java and Node.js SDK packages, and you can integrate it in your Catalyst web or Android application. The Catalyst console provides easy access to code templates for these environments that you can implement in your application's code. You can also test Face Analytics using sample images in the console, and obtain the predictions of the attributes mentioned above, for the detected faces.

You can refer to the Java SDK documentation and Node.js SDK documentation for code samples of Face Analytics. Refer to the API documentation to learn about the API available for Face Analytics.


Key Concepts

Before you learn about the use cases and implementation of Face Analytics, it's important to understand its fundamental concepts in detail.

Facial Landmark Localization

Facial landmarking is the process of detecting and localizing specific key point characteristics on the face. Key points such as the corners of the eyes, the endpoints of the eyebrow arcs, the endpoints and arc of the lips, the tip of the nose, the position of the cheek contours, and the nostrils are commonly considered while processing age detection, gender detection, and emotion recognition.

Zia Face Analytics first provides the general coordinates of the face's location in an image. The coordinates of localized landmarks are also provided in three landmark localization modes:

  • Basic: This is a 0-point landmark detector that only detects the coordinates of the face [x1, y1, x2, y2].
    • x1, y1: Top left corner point of the face/face attribute/object
    • x2, y2: Bottom right corner point of the face/face attribute/object
  • Moderate: This is a 5-point landmark detector that detects the following:
    • Eyes: The center of both eyes
    • Nose: Nose tip
    • Lips: The center of both lips
  • Advanced: This is a 68-point landmark detector that detects the following:
    • Jawline: Face boundary
    • Eyebrows: Left and right eyebrow
    • Eyes: Left and right eye
    • Nose bridge
    • Nostril line
    • Upper lip: Upper and lower edge
    • Lower lip: Upper and lower edge

These details are provided for each detected face in the image individually in the response. Face Analytics also provides the confidence score of the determined face coordinates in the response.
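As an illustration, the basic-mode coordinates described above can be turned into a bounding box. The object shape and field name below are a hypothetical sketch, not the exact response schema; refer to the API documentation for the actual structure.

```javascript
// Sketch: derive a bounding box from basic-mode face coordinates.
// The [x1, y1, x2, y2] layout follows the description above; the
// surrounding object shape is a hypothetical example, not the real schema.
function boundingBox([x1, y1, x2, y2]) {
  return {
    left: x1,
    top: y1,
    width: x2 - x1,
    height: y2 - y1,
  };
}

const face = { coordinates: [120, 80, 320, 330] }; // hypothetical detected face
console.log(boundingBox(face.coordinates));
// { left: 120, top: 80, width: 200, height: 250 }
```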


Facial Emotion Detection

Facial emotion detection is the process of detecting human emotions based on facial expressions. Facial landmarking plays an important role in facial emotion detection. Key points such as the eye corners, mouth corners, and eyebrows play a primary role, compared to secondary landmarks such as the chin or cheek contours, in detecting facial expressions and classifying them into an emotion class. Facial emotion detection technology is designed using deep learning methods, such as Convolutional Neural Networks (CNNs), which analyze visual imagery.

Zia performs an in-depth analysis on a detected face and determines the presence of a smile on the face. The response contains confidence scores out of 1 for the smiling and not_smiling aspects. Based on the aspect with the higher value, Face Analytics predicts the presence or absence of a smile.
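The smile prediction therefore reduces to comparing the two confidence scores. The score object below is a hypothetical example of the smiling and not_smiling values, not the exact API response schema.

```javascript
// Sketch: choose the smile prediction from the two confidence scores
// (each between 0 and 1, as described above). The input object is a
// hypothetical example, not the exact API response schema.
function predictSmile(scores) {
  return scores.smiling >= scores.not_smiling ? "smiling" : "not_smiling";
}

console.log(predictSmile({ smiling: 0.93, not_smiling: 0.07 })); // "smiling"
console.log(predictSmile({ smiling: 0.12, not_smiling: 0.88 })); // "not_smiling"
```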


Age Detection

Face Analytics predicts the age range of a face detected in an image. Age detection technology implements a similar approach of machine learning methods and AI algorithms to identify a series of interest points on a face and analyze the localized landmarks for particular signs applicable to specific age groups.

Face Analytics detects the age of a face as one of the following age ranges: 0-2, 3-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, or more than 70 years old.

The response contains the confidence scores for each age range out of 1. Based on the age range with the highest confidence score, Face Analytics makes the final prediction.
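The final age prediction can be sketched as picking the range with the highest score. The keys below mirror the ranges listed above, but the object shape and exact key names are assumptions for illustration, not the real API schema.

```javascript
// Sketch: pick the age range with the highest confidence score.
// The range keys mirror the ranges listed above; the exact key names
// in the real API response may differ.
function predictAgeRange(scores) {
  return Object.entries(scores).reduce((best, entry) =>
    entry[1] > best[1] ? entry : best
  )[0];
}

const ageScores = {
  "0-2": 0.01, "3-9": 0.02, "10-19": 0.05, "20-29": 0.62,
  "30-39": 0.21, "40-49": 0.05, "50-59": 0.02, "60-69": 0.01,
  ">70": 0.01,
};
console.log(predictAgeRange(ageScores)); // "20-29"
```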


Gender Detection

Face Analytics detects the gender of the faces by implementing deep learning algorithms to identify key characteristics associated with each specific gender. Certain localized points of interest in the face play a crucial role in determining the gender of a human. Zia compares the analyzed results with a vast array of datasets to arrive at a prediction of the gender.

Similar to the other attributes, the response contains the confidence scores for male and female out of 1. Based on the gender with the higher confidence score, Face Analytics makes the final prediction.


Input Format

Zia Face Analytics performs facial detection and attributes recognition by analyzing image files. Face Analytics supports the following input file formats:

  • .jpg/.jpeg
  • .png

You can code the Catalyst application to use the end user's device camera to capture photos and process them as input files. You can also allow users to upload image files from their device's storage to the Catalyst application to generate the results.

The input provided using the API request contains the input image file, value for the landmark localization mode, and boolean values to specify if the emotion, age, and gender detection need to be performed. If you don't specify the mode or the required attributes to be detected, all attributes will be detected and the advanced mode will be implemented by default.
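The defaulting behavior described above can be sketched as follows. The option names here are illustrative placeholders, not the documented request parameters; check the API documentation for the actual field names.

```javascript
// Sketch: apply the documented defaults — if the mode or the attribute
// flags are omitted, all attributes are detected and advanced mode is used.
// Option names here are illustrative placeholders, not the real API fields.
function withDefaults(options = {}) {
  return {
    mode: options.mode ?? "advanced",
    emotion: options.emotion ?? true,
    age: options.age ?? true,
    gender: options.gender ?? true,
  };
}

console.log(withDefaults({ mode: "basic", age: false }));
// { mode: "basic", emotion: true, age: false, gender: true }
```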

You can check the request format from the API documentation.

The user must follow these guidelines while providing the input, for better results:

  • Avoid providing blurred or corrupted images.
  • Ensure that the faces in the image are clear, visible, and distinct.
  • Do not upload images with partial faces, silhouettes, side profiles, or other unrecognizable angles of the faces. The entire face must be visible in the image for better predictions. Face Analytics cannot detect faces that are tilted or rotated beyond a certain degree.
  • Ensure that there is no textual content or watermark present over the faces in the image.
  • The file size must not exceed 10 MB.
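A minimal client-side check of the format and file-size guidelines above might look like this sketch:

```javascript
// Sketch: validate an input file against the documented constraints —
// .jpg/.jpeg/.png format and a maximum size of 10 MB.
const MAX_SIZE_BYTES = 10 * 1024 * 1024;
const ALLOWED_EXTENSIONS = [".jpg", ".jpeg", ".png"];

function isValidInput(fileName, fileSizeBytes) {
  const ext = fileName.slice(fileName.lastIndexOf(".")).toLowerCase();
  return ALLOWED_EXTENSIONS.includes(ext) && fileSizeBytes <= MAX_SIZE_BYTES;
}

console.log(isValidInput("team.jpeg", 2 * 1024 * 1024)); // true
console.log(isValidInput("team.gif", 1024)); // false (unsupported format)
```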

Response Format

Zia Face Analytics returns the response in the following ways:

  • In the console: When you upload a sample image with faces in the console, the results are returned in two response formats:
    • Textual format: The textual response contains the presence of the face, smile, the detected gender, and the detected age range of each face with their confidence levels as percentage values.
    • JSON format: The JSON response contains the general coordinates of the faces, the coordinates of their localized landmarks, the confidence score for each aspect of the detected smile, age range, and gender of the faces, along with the predictions for each attribute. A confidence score between 0 and 1 equates to a percentage value; for example, a score of 0.8 corresponds to a confidence level of 80%.
  • Using the SDKs: When you send an image file using an API request, you will receive only a JSON response containing the results in the format specified above.

You can check the JSON response format from the API documentation.
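Converting the JSON response's 0-to-1 confidence scores into the percentage values shown in the textual response is a simple scaling:

```javascript
// Sketch: convert a confidence score between 0 and 1 into the
// percentage value used in the console's textual response.
function toPercentage(score) {
  return Math.round(score * 100);
}

console.log(toPercentage(0.97)); // 97
console.log(toPercentage(0.5)); // 50
```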



Key Features

  1. Customized Results

    Face Analytics offers you the ability to enable or disable the attributes that you require the predictions for. You can enable or disable smile detection, age detection, or gender detection as per your needs. You can also select a landmark localization mode, and enable the coordinates prediction for the facial features that you require.
  2. Confidence Score for Each Prediction

    The confidence score provided for each prediction helps the user verify the level of accuracy of the prediction. The end user can analyze the confidence score and make informed decisions based on the accuracy of the result. The confidence score also helps them decide on providing better quality input for more accurate results.
  3. Accuracy of Results

    Zia is an AI-driven assistant that undergoes repeated systematic training to generate results with higher accuracy and a lower error margin. The AI is trained using various machine learning techniques to perform complex computations and analysis. The training model is highly rigorous: it studies and analyzes large volumes of data, which ensures that the results generated are precise, accurate, and reliable.
  4. Rapid Performance

    Face Analytics generates results almost instantaneously when the image is uploaded. Catalyst ensures a high throughput of data transmission and a minimal latency in serving requests. The fast response time enhances your application's performance, and provides a satisfying experience for the end user.
  5. Seamless Integration

    You can easily implement Face Analytics in your application without having to learn the complex workings of the machine learning algorithms or the backend setup. You can implement the ready-made code templates provided for the Java and Node.js platforms in any of your Catalyst applications that require Face Analytics.
  6. Testing in the Console

    The testing feature in the console enables you to verify the efficiency of Face Analytics. You can upload sample images and view the results. This allows you to get an idea about the format and accuracy of the response that will be generated when you implement it in your application.

Use Cases

Face detection, age detection, emotion detection, and gender detection technologies are implemented in a wide range of applications and scenarios. The following are some use cases for Zia Face Analytics:

  • A job portal application requires the applicants to upload their own photographs for their online profile. The app implements Face Analytics to determine the age range and gender of the applicant using the photograph they upload. If the predicted age range and gender do not match with the age and gender details provided by the applicant, the app flags the image and requests the applicant to provide a different photograph or submit an identity proof for verification.
  • A security application that analyzes images captured and provided by a network of security cameras in a retail outlet implements Face Analytics to locate human presence using face detection coordinates, in case of break-ins, robberies or shop lifting. The application also processes multiple images to determine the gender and age range of the faces, to identify the demographics of the perpetrators better.

Face Analytics can also be implemented in the following scenarios:

  • A security application linked to a surveillance camera outside a pub or resto-bar to detect and deny entry to minors
  • An application that analyzes images from events and gatherings to draw statistics on customer demographics based on the attendees
  • A cybersecurity application that monitors the illegal usage and distribution of photographs of minors on social media platforms
  • An application that processes images captured by security cameras in a retail store to analyze customer satisfaction based on smile detection


Implementation

This section only covers working with Face Analytics in the Catalyst console. Refer to the SDK and API documentation sections for implementing Face Analytics in your application's code.

As mentioned earlier, you can access the code templates that will enable you to integrate Face Analytics in your Catalyst application from the console, and also test the feature by uploading images with faces and obtaining the results.

Access Face Analytics

To access Face Analytics in your Catalyst console:

  1. Navigate to Zia Services under Discover, then click Access Now on the Face Analytics window.
  2. Click Try a Demo in the Face Analytics feature page.

    This will open the Face Analytics feature.

Test Face Analytics in the Catalyst Console

You can test Face Analytics by either selecting a sample image from Catalyst or by uploading your own image.

To scan a sample image with a face and view the result:

  1. Click Select a Sample Image in the box.
  2. Select an image from the samples provided.

    Face Analytics will scan the image for faces, analyze and predict the attributes, and provide a textual result of the analysis and the confidence level of each prediction in percentage values, for each detected face.

    The colors in the response bars indicate the range of the confidence percentage of a prediction: red for 0-30%, orange for 30-80%, and green for 80-100%.
    The general coordinates and the coordinates of the localized landmarks of each face are provided, in addition to the other detections, in the JSON response in the advanced landmark localization mode by default. Click View Response to view the JSON response.

    You can refer to the API documentation to view a complete sample JSON response structure.
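The color bands described in the steps above can be sketched as a simple mapping from the confidence percentage:

```javascript
// Sketch: map a confidence percentage to the console's response-bar
// colors described above: red 0-30%, orange 30-80%, green 80-100%.
function barColor(percentage) {
  if (percentage < 30) return "red";
  if (percentage < 80) return "orange";
  return "green";
}

console.log(barColor(25)); // "red"
console.log(barColor(55)); // "orange"
console.log(barColor(92)); // "green"
```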

To upload your own image and test Face Analytics:

  1. Click Upload under the Result section.

    If you're opening Face Analytics after you have closed it, click Browse Files in this box.
  2. Upload a file from your local system.
    Note: The file must be in .jpg/.jpeg or .png format. The file size must not exceed 10 MB.
    The console will scan the image for faces and display the analysis of the four attributes of the faces detected.

    As mentioned earlier, Face Analytics can detect and analyze the attributes of up to 10 faces in an image. You can check the results of each detected face by clicking the side arrows. The JSON response also contains the results of each detected face.

Access Code Templates for Face Analytics

You can implement Face Analytics in your Catalyst application using the code templates provided by Catalyst for Java and Node.js platforms.

You can access them from the section below the test window. Click either the Java SDK or NodeJS SDK tab, and copy the code using the copy icon. You can paste this code in your web or Android application's code wherever you require.

You can process the input file as a new File in Java. The ZCFaceAnalyticsOptions module provides you with the option to enable or disable the analysis of each attribute. For example, if you set setAgeNeeded as false, Face Analytics will not provide the age prediction results for the faces detected in the image. You can also set the landmark localization mode as BASIC, MODERATE, or ADVANCED using setAnalyseMode.

In Node.js, the facePromise object is used to hold the input image file and the options set for it. You can specify the mode as basic, moderate, or advanced, and set an attribute as true or false to enable or disable the analysis for it.

Still can't find what you're looking for?

Write to us: support@zohocatalyst.com