Image Moderation

 

Introduction

Image Moderation is a Catalyst Zia AI-driven service that monitors and recognizes inappropriate and unsafe content in images. It is a sub-feature of content moderation that decides if a particular image is safe for work, based on a pre-determined set of rules. Zia Image Moderation monitors for and recognizes the following criteria in an image:

  • Explicit nudity
  • Racy or Suggestive content
  • Bloodshed, gore, and violence
  • Drugs and substances
  • Weapons

Image Moderation is used to ensure that the user-generated content in your Catalyst applications does not violate the application standards and guidelines. You can maintain your brand reputation and general decorum by flagging, filtering, or automatically deleting the detected inappropriate content.

Catalyst provides Image Moderation in the Java and Node.js SDK packages, which you can integrate in your Catalyst web or Android application. The Catalyst console provides easy access to code templates for these environments that you can implement in your application's code.

You can also test Image Moderation using sample images in the console, and obtain the moderation results based on the attributes mentioned above. Image Moderation also provides a confidence score for each result that enables you to verify its accuracy and make informed decisions on the next steps to be taken.

You can refer to the Java SDK documentation and Node.js SDK documentation for code samples of Image Moderation. Refer to the API documentation to learn about the API available for Image Moderation.

 

Key Concepts

Before you learn about the use cases and implementation of Image Moderation, it's important to understand its fundamental concepts in detail.

Moderation Modes

Image Moderation enables you to select specific criteria to detect and flag during the moderation process. You can do this by specifying one of the three moderation modes in the input along with the image file.

The three moderation modes available are:

  • Basic: Detects nudity alone in an image.
  • Moderate: Detects nudity and racy content in an image.
  • Advanced: Detects all the supported criteria: nudity, racy content, gore, drugs, and weapons.

The accuracy of the moderation process varies with each moderation mode. The accuracy levels are as follows:

  • Basic: Can detect unsafe content with 98% accuracy
  • Moderate: Can detect unsafe content with 96% accuracy
  • Advanced: Can detect unsafe content with 93-95% accuracy

You can weigh these accuracy levels against your use case, choose a suitable moderation mode, and enable moderation for those criteria alone.
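
As an illustration of how the mode choice could be wired into application code, the sketch below maps an upload context to one of the three mode values. This is a minimal sketch: the chooseModerationMode helper and the context names are hypothetical and not part of the Catalyst SDKs; only the mode values basic, moderate, and advanced come from this documentation.

    // Hypothetical helper: pick a moderation mode based on where the image is used.
    // Only the mode values 'basic', 'moderate', and 'advanced' are defined by
    // Image Moderation; the contexts below are illustrative.
    function chooseModerationMode(context) {
      switch (context) {
        case 'profile-picture':
          return 'basic';      // nudity checks only
        case 'public-post':
          return 'moderate';   // nudity and racy content
        default:
          return 'advanced';   // all supported criteria
      }
    }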

 

Input Format

Zia Image Moderation accepts image files in the following input formats:

  • .jpg/.jpeg
  • .png

You can implement Image Moderation in your application and enable input as you require, based on your use case. For example, you can automatically moderate image files uploaded by the end users of your application, and delete unwanted images in real-time.

Zia can detect instances of unsafe content more reliably when they are visible and distinct in the image, and not obstructed by textual content or watermarks.

The input provided in the API request contains the image file and the value for the moderation mode: basic, moderate, or advanced. If you don't specify the moderation mode, the advanced mode is applied by default. The file size must not exceed 10 MB.

You can check the request format from the API documentation.
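
Because these constraints apply to every request, an application can enforce them before calling Image Moderation. The sketch below is a minimal Node.js pre-check assuming a local file path; the validateImageForModeration function and its error messages are illustrative and not part of the Catalyst SDK.

    const fs = require('fs');
    const path = require('path');

    const MAX_BYTES = 10 * 1024 * 1024;                  // 10 MB limit documented above
    const ALLOWED = new Set(['.jpg', '.jpeg', '.png']);  // supported input formats

    // Hypothetical pre-check run before handing the file to Image Moderation.
    function validateImageForModeration(filePath) {
      const ext = path.extname(filePath).toLowerCase();
      if (!ALLOWED.has(ext)) {
        throw new Error('Unsupported format ' + ext + '; use .jpg/.jpeg or .png');
      }
      if (fs.statSync(filePath).size > MAX_BYTES) {
        throw new Error('File exceeds the 10 MB limit for Image Moderation');
      }
    }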

 

Response Format

Zia Image Moderation returns the response in the following ways:

  • In the console: When you upload a sample image in the console, it returns the results in two response formats:
    • Textual format:
      The textual response contains a list of the detected unsafe content, with the confidence levels of the detection as percentage values. It provides the prediction as Safe to Use or Unsafe to Use with a confidence percentage, based on the detected content. In the textual response, the supported criteria are grouped under the following categories:
      • Violence: Weapons
      • Suggestive: Explicit nudity, Revealing clothes
      • Substance Abuse: Drugs
      • Visually Disturbing: Blood
    • JSON format: The JSON response contains the probability of each criterion of the moderation mode as a value between 0 and 1, based on the detected content. The criteria in the JSON response are: racy, weapon, nudity, gore, and drug. It provides the prediction as safe_to_use or unsafe_to_use, with a confidence score between 0 and 1, based on the probabilities of all the criteria. The confidence score can be equated to percentage values as follows:
      Confidence Level (percentage)    Confidence Score (0 to 1)
      0-9                              0.0
      10-19                            0.0
      20-29                            0.0
      30-39                            0.0
      40-49                            0.01
      50-59                            0.12
      60-69                            0.23
      70 and above                     0.63
  • Using the SDKs: When you send an image file using an API request, you will only receive a JSON response containing the results in the format specified above.

You can check the JSON response format from the API documentation.
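
To show how an application might act on this response, the Node.js sketch below flags an image when the prediction is unsafe_to_use or when any criterion's probability crosses a threshold. The criterion names and prediction values come from the description above, but the exact shape of the response envelope (for example, whether the probabilities sit under a probability key) is an assumption; verify it against the API documentation.

    // Minimal sketch of interpreting an Image Moderation result.
    // The field layout assumed here may differ from the actual response —
    // check the API documentation for the authoritative structure.
    function shouldBlockImage(result, threshold = 0.5) {
      if (result.prediction === 'unsafe_to_use') {
        return true;                              // Zia already judged the image unsafe
      }
      const probs = result.probability || {};     // assumed container for criterion scores
      const criteria = ['racy', 'weapon', 'nudity', 'gore', 'drug'];
      return criteria.some((name) => (probs[name] || 0) >= threshold);
    }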

 

Benefits

  1. Protect Community Users

    Image Moderation assists you in providing a safe and protected environment for your customers and application users. It helps you enforce compliance with legal standards, company policies, and general decorum. Catalyst enables you to maintain your brand and customer reputation by ensuring that images containing disturbing content like gore, substance abuse, pornography, and graphic adult content are not circulated in your application's platform.
  2. Customized and Accurate Results

    The moderation modes in Image Moderation provide flexibility in detecting instances of specific categories of unsafe content, based on your requirements. The results are also generated with low error margins, as Zia's training model is implemented with repeated systematic training using various machine learning techniques. Zia studies and analyzes large volumes of data to be able to perform complex analysis, ensuring that the results generated are precise, accurate, and reliable.
  3. Automatic Real-Time Monitoring

    Image Moderation enables you to perform real-time monitoring of the user-generated content in your application. Catalyst saves the time and effort required to moderate content manually, by limiting or preventing human review. You can also process images with unsafe content in any way you need. For example, you can implement an additional manual review process, or code your application to delete the detected content automatically and issue warnings to, or terminate the accounts of, the users who violate guidelines. This ensures that your application is monitored 24/7.
  4. Rapid Performance

    Image Moderation generates results instantaneously with a short turnaround time, as soon as the user-generated content is uploaded to your platform. Catalyst ensures a high throughput of data transmission and minimal latency in serving requests. The fast response time, state-of-the-art infrastructure, and scalable resources ensure that it handles unanticipated spikes in demand and provides superior performance.
  5. Seamless Integration

    You can easily implement Image Moderation in your application without having to learn the complex processing of the algorithms or the backend setup. You can implement the ready-made code templates provided for the Java and Node.js platforms in any of your Catalyst applications that require Image Moderation.
  6. Testing in the Console

    The testing feature in the console enables you to verify the efficiency of Image Moderation. You can upload sample images and view the results. This allows you to get an idea about the format and accuracy of the response that will be generated when you implement it in your application.
 

Use Cases

Image Moderation is essential for applications that allow user-generated content to be freely circulated. The following are some use cases for Zia Image Moderation:

  • A social media application that enables users to post pictures in their profiles implements the moderate mode of Image Moderation to monitor the images about to be published, as soon as the users click "Upload". This monitoring process happens in the background, and Zia instantly detects images containing explicit nudity and racy content. The application's logic is coded to delete them automatically, and issue warnings to the users that uploaded the inappropriate content.
  • A website implements a child-friendly version, and requires the content distributed in it to be strictly monitored for all instances of gore, nudity, weapons, drugs, and racy content. It uses the advanced mode of Image Moderation to monitor and automatically delete all inappropriate content, to ensure a safe space for minors to use the website freely without parental guidance.

Image Moderation can also be implemented in the following scenarios:

  • Applications that display graphic content warnings and cover images containing gore or nudity
  • Applications that enforce respect towards cultural and religious beliefs by preventing offensive images from being published
  • Apps created for professional and educational environments
  • Child-friendly apps and websites
  • Blogging or social media applications that enable users to upload content without admin review
  • Apps that prevent cyberbullying and the distribution of pornographic material
 

Implementation

This section only covers working with Image Moderation in the Catalyst console. Refer to the SDK and API documentation sections for implementing Image Moderation in your application's code.

As mentioned earlier, you can access the code templates that will enable you to integrate Image Moderation in your Catalyst application from the console, and also test the feature by uploading images and obtaining the results.

Access Image Moderation

To access Image Moderation in your Catalyst console:

  1. Navigate to Zia Services under Discover, then click Access Now on the Image Moderation window.
  2. Click Try a Demo in the Image Moderation feature page.

    This will open the Image Moderation feature.
 

Test Image Moderation in the Catalyst Console

You can test Image Moderation by either selecting a sample image from Catalyst or by uploading your own image.

To scan a sample image and view the result:

  1. Click Select a Sample Image in the box.
  2. Select an image from the samples provided.

    Image Moderation will scan the image for inappropriate content of all criteria in the advanced mode, and display the probability of each detected criterion as a percentage value.

    The colors in the response bars indicate the safety of the image in the following way: red indicates that the image is unsafe to use, orange indicates that the image is partially safe to use, and green indicates that the image is safe to use.
    You can also view the complete JSON response, which includes the probability of each detected criterion, the prediction, and its confidence score. Click View Response to view the JSON response.

    You can refer to the API documentation to view a complete sample JSON response structure for each moderation mode.

To upload your own image and test Image Moderation:

  1. Click Upload under the Result section.

    If you're opening Image Moderation after you have closed it, click Browse Files in this box.
  2. Upload a file from your local system.
    Note: The file must be in .jpg/.jpeg or .png format. The file size must not exceed 10 MB.
    The console will scan the image for inappropriate content and display the results.

    You can click View Photo to uncover the image.

    You can view the JSON response as well in the same way.
 

Access Code Templates for Image Moderation

You can implement Image Moderation in your Catalyst application using the code templates provided by Catalyst for Java and Node.js platforms.

You can access them from the section below the test window. Click either the Java SDK or NodeJS SDK tab, and copy the code using the copy icon. You can paste this code in your web or Android application's code wherever you require.

In Java, you can process the input file as a new File object. The ZCImageModerationOptions module enables you to set the moderation mode to BASIC, MODERATE, or ADVANCED using setAnalyseMode.

In Node.js, the imPromise object is used to hold the input image file and the moderation mode set for it. You can specify the mode as basic, moderate, or advanced to process the image in the required mode.
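
For orientation, a minimal sketch of how the Node.js template is typically wired into a function is shown below. The initialization flow and the moderateImage method name are assumptions about the zcatalyst-sdk-node Zia component; copy the exact template from the console for the authoritative names and signature.

    // Minimal sketch, assuming the Catalyst Node.js SDK exposes an image
    // moderation call on the Zia component; verify the exact method name and
    // signature against the console template and the SDK documentation.
    const catalyst = require('zcatalyst-sdk-node');
    const fs = require('fs');

    async function moderateUpload(req, filePath) {
      const app = catalyst.initialize(req);    // initialize the SDK against the incoming request
      const zia = app.zia();                   // Zia services component
      // imPromise holds the input image and the moderation mode, as described above.
      const imPromise = zia.moderateImage(fs.createReadStream(filePath), { mode: 'moderate' });
      return await imPromise;                  // resolves to the JSON moderation result
    }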
