SafeSearch


This activity detects evocative or provocative content in an image, such as adult content, violent content, weapons, and visually disturbing content. Beyond flagging an image based on the presence of unsafe content, Amazon Rekognition also returns a hierarchical list of labels with confidence scores, so you can filter images based on your requirements.
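As a rough illustration of what this activity does under the hood, the sketch below calls Amazon Rekognition's DetectModerationLabels API directly with Python and boto3. The file name, the MinConfidence value, and the credential setup are assumptions for the example, not part of the activity's configuration.

```python
import boto3

# Minimal sketch, assuming the activity wraps Rekognition's
# DetectModerationLabels API. AWS credentials and region are
# assumed to be configured in the environment.
rekognition = boto3.client("rekognition")

# "unsafe.jpg" is a placeholder for the file passed in ImagePath.
with open("unsafe.jpg", "rb") as image_file:
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_file.read()},
        MinConfidence=60,  # only return labels at or above this confidence
    )

# Each label carries a ParentName, so the results form a hierarchy,
# e.g. "Weapons" nested under "Violence".
for label in response["ModerationLabels"]:
    parent = label["ParentName"] or "(top level)"
    print(f'{parent} > {label["Name"]}: {label["Confidence"]:.1f}%')
```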

Properties

INPUT

ImagePath*: Specify the path of the image file to be processed.

MISC

Display Name: Displays the name of the activity. You can also customize the activity name to help troubleshoot issues faster. This name will be used for logging purposes.

SkipOnError: Specifies whether to continue executing the workflow even if the activity throws an error. It accepts only the Boolean values "True" or "False". By default, it is set to "False".
True: Continues the workflow to the next step.
False: Stops the workflow and throws an error.

Version: Specifies the version of the Amazon Rekognition feature in use.

OUTPUT

Output: This is not a mandatory field. However, to see the returned labels and confidence scores, declare a variable here.

Result: Declare a variable here to validate whether the activity succeeded. It accepts only Boolean values. This is not a mandatory field.

* Represents mandatory fields to execute the workflow.
