NSFW Scraper

Deprecated

Pricing: $2.50 / 1,000 results

Developed by Datastorm

Maintained by Community

Total users: 6

Monthly users: 4

Last modified: 2 years ago

Check images and text for NSFW (not safe for work) content. Input can be an image URL, a phrase, or a dataset.

Text Content: Scans text for offensive, sexually explicit, or suggestive content, and also checks for self-harm, violence, racism, or hate speech.

Image Scanning: Detects adult content in images that is generally inappropriate for people under the age of 18, including nudity, sexual activity, pornography, violence, gore, etc. (allowed file types: jpg, jpeg, png, tiff).

Input

items: a list of items, each containing either an image URL (for image scanning) or text content (for text scanning).

Example:

"items":["http://google.com/image/image.jpg","I hate you Brad"]

datasets: a list of datasets whose content should be scanned. For each dataset, also specify which fields to scan.

Example:

[
  {
    "id": "serTsUnXrLOVXjPtd",
    "fields": ["description", "productUrl"]
  }
]
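
Putting the two inputs together, the sketch below shows one way such an actor could be called with the official apify-client Python package. The API token and actor ID are placeholders (the listing does not state the actor ID, and the actor is deprecated), and the run input simply mirrors the items and datasets examples above.

from apify_client import ApifyClient

# Placeholder token; supply your own Apify API token.
client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Run input mirroring the "items" and "datasets" examples above.
run_input = {
    "items": [
        "http://google.com/image/image.jpg",  # image URL -> image scanning
        "I hate you Brad",                    # plain text -> text scanning
    ],
    "datasets": [
        {"id": "serTsUnXrLOVXjPtd", "fields": ["description", "productUrl"]},
    ],
}

# "<ACTOR_ID>" is a placeholder; this listing does not give the actor's ID.
run = client.actor("<ACTOR_ID>").call(run_input=run_input)

# Results are stored in the run's default dataset.
for record in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(record)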

Output

Output includes the following fields:

  • url (the image URL, if an image was checked)
  • item (the text content, if text was checked)
  • response
    • nsfw_likelihood (a scale of 1 to 5, where 5 is the highest and means most unsafe)
    • items (the list of labels that were returned, each with its own likelihood)

Example

[
  {
    "url": "https://google.com/images/imageToCheck.jpg",
    "response": {
      "status": "success",
      "nsfw_likelihood": 5,
      "items": [
        {
          "label": "Female Swimwear Or Underwear",
          "likelihood": 5
        },
        {
          "label": "Suggestive",
          "likelihood": 5
        },
        {
          "label": "Revealing Clothes",
          "likelihood": 5
        }
      ]
    }
  }
]
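
As an illustration of how this output might be consumed, the sketch below flags records whose nsfw_likelihood meets a threshold. The threshold of 4 and the variable names are arbitrary choices for the example, not something the actor prescribes, and the hard-coded results list simply copies the example record above.

# Records shaped like the example output above (normally read from the
# run's dataset rather than hard-coded).
results = [
    {
        "url": "https://google.com/images/imageToCheck.jpg",
        "response": {
            "status": "success",
            "nsfw_likelihood": 5,
            "items": [
                {"label": "Female Swimwear Or Underwear", "likelihood": 5},
                {"label": "Suggestive", "likelihood": 5},
                {"label": "Revealing Clothes", "likelihood": 5},
            ],
        },
    },
]

THRESHOLD = 4  # arbitrary cut-off on the 1-to-5 nsfw_likelihood scale

for record in results:
    response = record.get("response", {})
    if response.get("status") != "success":
        continue
    if response.get("nsfw_likelihood", 0) >= THRESHOLD:
        # "url" is present for image checks, "item" for text checks.
        source = record.get("url") or record.get("item")
        labels = [entry["label"] for entry in response.get("items", [])]
        print(f"Flagged {source}: {', '.join(labels)}")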