
Introduction

BacklinkAPI banner

Welcome to the Backlink API documentation! Our endpoints give you information about domains, subdomains, backlinks, and URLs in our database.

We have language bindings in Shell and Python, with more to come. You can see code examples on the right and switch between languages using the tabs in the top right.

Authentication

Authentication banner

To authorize, use this code:

import requests

r = requests.get('http://api_endpoint_here/', auth=('user', 'password'))
# With shell, you can just pass the correct credentials with each request
curl -u "user:password" "http://api_endpoint_here/"

Make sure to replace user and password with your credentials.

The Backlink API uses HTTP basic authentication to control access to the API.
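
If you make many requests, you can reuse a single authenticated session instead of passing credentials every time. Below is a minimal Python sketch; the endpoint URL and credentials are placeholders, as in the examples above.

import requests

# Reuse one HTTP session so the Basic Auth credentials are sent with every request.
session = requests.Session()
session.auth = ('user', 'password')  # replace with your credentials

r = session.get('http://api_endpoint_here/sr', params={'url': 'domaincrawler.com'})
print(r.status_code, r.json())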

Backlink API endpoints

BacklinkAPI endpoints banner

linktexts

import requests

r = requests.get('http://api_endpoint_here/linktexts?url=domaincrawler.com&limit=1&offset=0&order[0]=-anchortext', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/linktexts?url=domaincrawler.com&limit=1&offset=0&order[0]=-anchortext"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "percentage": 55.93,
      "refdoms": 18,
      "anchortext": "",
      "type": "MostlyImages",
      "frequency": 1340
    }
  ],
  "count": 108,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns anchor texts from links and how frequently they are used.

HTTP Request

GET http://api_endpoint_here/linktexts?url=domaincrawler.com

Query Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -refdoms
-frequency
Sort results by given fields (in the provided order). Available values:
-anchortext
-frequency
-refdoms
anchortext
frequency
refdoms
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
anchortext
frequency
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded (the Python requests library does this automatically, as shown in the sketch after this table), e.g.:
instead of
where=frequency gt 10
write
where=frequency%20gt%2010
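
In Python, the requests library URL-encodes query parameters for you, so the where expression can be passed as-is. The sketch below is an illustration only (placeholder endpoint and credentials); it filters linktexts using the allowed fields anchortext and frequency.

import requests

# requests URL-encodes query parameters automatically, so the raw filter
# expression can be passed as-is. Multiple conditions are joined with AND
# and string values are single-quoted.
params = {
    'url': 'domaincrawler.com',
    'limit': 10,
    'where': "frequency gt 10 AND anchortext ne ''",
}
r = requests.get('http://api_endpoint_here/linktexts', params=params,
                 auth=('user', 'password'))
print(r.json())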

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].percentage Percentage frequency of the anchor text in relation to all anchor texts
.response[].refdoms Number of referring domains with the respective anchor text
.response[].anchortext Anchor text
.response[].type Type of links: OnlyImages, OnlyText, MostlyImages, MostlyText
.response[].frequency Absolute total frequency of the anchor text (i.e. how often it occurs)
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code
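
Since count reports the total number of rows and range echoes the requested offset and limit, you can page through large result sets. A minimal sketch (the helper function is illustrative, not part of the API):

import requests

def fetch_all_linktexts(url, page_size=100):
    # Page through /linktexts using limit/offset until count rows have been fetched.
    rows, offset = [], 0
    while True:
        r = requests.get('http://api_endpoint_here/linktexts',
                         params={'url': url, 'limit': page_size, 'offset': offset},
                         auth=('user', 'password'))
        data = r.json()
        rows.extend(data['response'])
        offset += page_size
        if offset >= data['count']:
            return rows

print(len(fetch_all_linktexts('domaincrawler.com')))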

linkedpages

import requests

r = requests.get('http://api_endpoint_here/linkedpages?url=domaincrawler.com&limit=1&offset=0&order[0]=-linkedpage', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/linkedpages?url=domaincrawler.com&limit=1&offset=0&order[0]=-linkedpage"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "linkedpage": "https://www.domaincrawler.com/",
      "percentage": 29.63,
      "refdoms": 2,
      "frequency": 710
    }
  ],
  "count": 53,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns a list of URLs and their backlink stats.

HTTP Request

GET http://api_endpoint_here/linkedpages?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -refdoms
-frequency
Sort results by given fields (in the provided order). Available values:
-linkedpage
-frequency
-refdoms
linkedpage
frequency
refdoms
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
linkedpage
frequency
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=frequency gt 10
write
where=frequency%20gt%2010

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].linkedpage URL of the target page
.response[].percentage Percentage of links pointing to the target page in relation to all links pointing to the belonging domain
.response[].refdoms Number of referring domains pointing to the respective target page
.response[].frequency Number of external links pointing to the respective target page
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

tagtypes

import requests

r = requests.get('http://api_endpoint_here/tagtypes?url=domaincrawler.com&limit=1&offset=0&order[0]=-frequency', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/tagtypes?url=domaincrawler.com&limit=1&offset=0&order[0]=-frequency"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "tag": "a",
      "frequency": 1601
    }
  ],
  "count": 5,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns the type of link (a, aimg, iframe, 301, 302) and how frequently each is used.

HTTP Request

GET http://api_endpoint_here/tagtypes?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -tag Sort results by given fields (in the provided order). Available values:
-tag
-frequency
-refdoms
tag
frequency
refdoms
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
tag
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=tag eq '301'
write
where=tag+eq+%27301%27

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].tag Tag name, e.g.: layer, ilayer, a, aimg, frame, iframe, area, 301, 302
.response[].frequency Absolute total frequency of the tag (i.e. how often it occurs)
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code
anabacklinks

import requests

r = requests.get('http://api_endpoint_here/anabacklinks?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/anabacklinks?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "backlinks": 2398
    }
  ],
  "status": 200
}

This endpoint returns the number of backlinks.

HTTP Request

GET http://api_endpoint_here/anabacklinks?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[0].backlinks Number of referring links
.status HTTP Status Code
backlinks

import requests

r = requests.get('http://api_endpoint_here/backlinks?url=domaincrawler.com&limit=1&offset=0&order[0]=-extL', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/backlinks?url=domaincrawler.com&limit=1&offset=0&order[0]=-extL"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "nofollow": false,
      "country": "Sweden",
      "nr": 1,
      "extL": 2,
      "insDate": "20190803",
      "city": "",
      "ip": "192.36.109.27",
      "intL": 16,
      "linktarget": "http://domaincrawler.com/",
      "title": "  Direktbetalning",
      "updDate": "20200213",
      "ccode": "SE",
      "blpcount": 1,
      "cdate": "20190803",
      "backlinkpage": "http://natbetalningar.se/direktbetalning/",
      "pos": 12,
      "sumL": 18,
      "linktext": "domaincrawler.com",
      "tag": "a",
      "udate": "20200213",
      "sr": 0
    }
  ],
  "count": 2396,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns a list of backlinks and their stats.

HTTP Request

GET http://api_endpoint_here/backlinks?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -sr Sort results by given fields (in the provided order). Available values:
extL
-extL
intL
-intL
sr
-sr
pos
-pos
nofollow
-nofollow
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
nofollow
extL
city
country
ccode
ip
intL
linktarget
title
blpcount
backlinkpage
pos
sumL
linktext
tag
insDate
updDate
cdate
udate
sr
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=pos gt 10
write
where=pos%20gt%2010

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].backlinkpage URL on which the backlink was found
.response[].title Page title of the page on which the backlink was found
.response[].linktext Anchor text
.response[].extL Number of links on the backlink page that go to external pages
.response[].intL Number of links on the backlink page that go to internal pages
.response[].sumL Number of links on the backlink page
.response[].sr Backlink page IV Search Rank (Page Rank). IV calculates for each page the page strength from the weighted backlinks that point to it. A page with many links is given a high IV Search Rank; a page with no or only a few links gets a low one
.response[].linktarget Target URL of the link
.response[].tag Link type (layer, ilayer, a, aimg, frame, iframe, area, 301, 302, etc.)
.response[].nofollow Flag to mark if backlink is nofollow
.response[].pos All links on the backlink page are numbered in order of appearance. This number therefore reflects the position of the link on the backlink page. “1” is the first position. The higher the number, the lower the position of the link on the page
.response[].blpcount Number of backlinks from referring URL to the same domain
.response[].nr
.response[].dsCount
Row index number + offset
.response[].ip IP of the backlink page
.response[].ccode Country code of the server location of the backlink page
.response[].country Country name of the server location of the backlink page
.response[].city City of the server location of the backlink page
.response[].active Flag to mark if backlink is active
.response[].insDate
.response[].cdate
Date when the link was created in the database
.response[].updDate
.response[].udate
Date of the update (recrawling)
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

domainpopularity

import requests

r = requests.get('http://api_endpoint_here/domainpopularity?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/domainpopularity?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "domainpopularity": 310
    }
  ],
  "status": 200
}

This endpoint returns the number of distinct referring domains.

HTTP Request

GET http://api_endpoint_here/domainpopularity?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[0].domainpopularity Number of referring domains
.status HTTP Status Code

ippopularity

import requests

r = requests.get('http://api_endpoint_here/ippopularity?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/ippopularity?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "ip": 315,
      "class_a": 108,
      "class_b": 105,
      "class_c": 102,
      "class_d": 0,
      "class_e": 0
    }
  ],
  "status": 200
}

This endpoint returns the number of distinct source IPs and networks.

HTTP Request

GET http://api_endpoint_here/ippopularity?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[0].ip Number of referring IPs
.response[0].class_a Number of referring IPs with a different A-class
.response[0].class_b Number of referring IPs with a different B-class
.response[0].class_c Number of referring IPs with a different C-class
.response[0].class_d Number of referring IPs with a different D-class
.response[0].class_e Number of referring IPs with a different E-class
.status HTTP Status Code

linkratio

import requests

r = requests.get('http://api_endpoint_here/linkratio?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/linkratio?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "homelinks": 211
    },
    {
      "deeplinks": 2187
    }
  ],
  "status": 200
}

This endpoint returns the backlink count to the homepage and to other pages.

HTTP Request

GET http://api_endpoint_here/linkratio?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[0].homelinks Number of links pointing to the homepage of the given domain
.response[1].deeplinks Number of links pointing to subpages of the given domain (deeplinks)
.status HTTP Status Code
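
Because the response is a list of single-key objects, you may want to merge them before computing the home/deep ratio. A short sketch based on the sample response above:

import requests

r = requests.get('http://api_endpoint_here/linkratio',
                 params={'url': 'domaincrawler.com'}, auth=('user', 'password'))

# Merge the single-key objects into one dict: {'homelinks': ..., 'deeplinks': ...}
stats = {}
for item in r.json()['response']:
    stats.update(item)

total = stats['homelinks'] + stats['deeplinks']
print('homepage share: {:.1f}%'.format(100 * stats['homelinks'] / total))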

linkspread

import requests

r = requests.get('http://api_endpoint_here/linkspread?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/linkspread?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "follow": 2362
    },
    {
      "nofollow": 36
    }
  ],
  "status": 200
}

This endpoint returns the follow/nofollow backlink count.

HTTP Request

GET http://api_endpoint_here/linkspread?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[0].follow Number of follow links pointing to the given domain
.response[1].nofollow Number of nofollow links pointing to the given domain
.status HTTP Status Code

linktypes

import requests

r = requests.get('http://api_endpoint_here/linktypes?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/linktypes?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "linktype": "text",
      "links": 1512
    },
    {
      "linktype": "image",
      "links": 882
    }
  ],
  "count": 2,
  "status": 200
}

This endpoint returns the backlink count grouped by link type (text and image).

HTTP Request

GET http://api_endpoint_here/linktypes?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[].linktype Type of link - Text or Image
.response[].links Number of text or image links pointing to the given domain, subdomain or URL
.count Total number of items in the collection
.status HTTP Status Code

linktextdistribution

import requests

r = requests.get('http://api_endpoint_here/linktextdistribution?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/linktextdistribution?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "percentage": 8.03,
      "refdoms": 203,
      "anchortext": "domaincrawler.com",
      "frequency": 385
    },
    {
      "percentage": 3.53,
      "refdoms": 4,
      "anchortext": "domaincrawler",
      "frequency": 169
    },
    {
      "percentage": 2.92,
      "refdoms": 20,
      "anchortext": "domaincrawler.com",
      "frequency": 140
    },
    {
      "percentage": 2.04,
      "refdoms": 2,
      "anchortext": "Go to Control Panel",
      "frequency": 98
    },
    {
      "percentage": 2.04,
      "refdoms": 2,
      "anchortext": "Go to Marketplace",
      "frequency": 98
    },
    {
      "percentage": 1.84,
      "refdoms": 2,
      "anchortext": "Career",
      "frequency": 88
    },
    {
      "percentage": 1.04,
      "refdoms": 1,
      "anchortext": "Ir a Marketplace",
      "frequency": 50
    },
    {
      "percentage": 1.04,
      "refdoms": 1,
      "anchortext": "Ir al Panel de Control",
      "frequency": 50
    },
    {
      "percentage": 1.04,
      "refdoms": 1,
      "anchortext": "Marketplaceを見る",
      "frequency": 50
    },
    {
      "percentage": 1.04,
      "refdoms": 1,
      "anchortext": "Перейти в Marketplace",
      "frequency": 50
    }
  ],
  "count": 10,
  "status": 200
}

This endpoint returns the top 9 most-used link texts and an accumulated value for the rest.

HTTP Request

GET http://api_endpoint_here/linktextdistribution?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
order[] -refdoms
-frequency
Sort results by given fields (in the provided order). Available values:
-anchortext
-refdoms
-frequency
anchortext
refdoms
frequency
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
frequency
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=frequency gt 10
write
where=frequency%20gt%2010

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].percentage Percentage frequency of the anchor text in relation to all anchor texts
.response[].refdoms Number of referring domains with the respective anchor text
.response[].anchortext Anchor text
.response[].frequency Absolute total frequency of the anchor text (i.e. how often it occurs)
.count Total number of items in the collection
.status HTTP Status Code

pagedistribution

import requests

r = requests.get('http://api_endpoint_here/pagedistribution?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/pagedistribution?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "linkedpage": "www.facebook.com/domaincrawler/",
      "percentage": 18.36,
      "refdoms": 0,
      "frequency": 38
    },
    {
      "linkedpage": "controlpanel.domaincrawler.com/",
      "percentage": 17.87,
      "refdoms": 0,
      "frequency": 37
    },
    {
      "linkedpage": "marketplace.domaincrawler.com/",
      "percentage": 17.87,
      "refdoms": 0,
      "frequency": 37
    },
    {
      "linkedpage": "www.linkedin.com/company/domaincrawler/",
      "percentage": 17.87,
      "refdoms": 0,
      "frequency": 37
    },
    {
      "linkedpage": "career.domaincrawler.com/",
      "percentage": 12.08,
      "refdoms": 0,
      "frequency": 25
    },
    {
      "linkedpage": "domaincrawler.teamtailor.com/",
      "percentage": 7.25,
      "refdoms": 0,
      "frequency": 15
    },
    {
      "linkedpage": "wordpress.org/",
      "percentage": 4.35,
      "refdoms": 0,
      "frequency": 9
    },
    {
      "linkedpage": "www.instagram.com/domaincrawler/",
      "percentage": 1.45,
      "refdoms": 0,
      "frequency": 3
    },
    {
      "linkedpage": "www.youtube.com/channel/UC-v9vX_UdXtkB5-K8rQ1V_A",
      "percentage": 1.45,
      "refdoms": 0,
      "frequency": 3
    },
    {
      "linkedpage": "support.domaincrawler.com/",
      "percentage": 0.48,
      "refdoms": 0,
      "frequency": 1
    },
    {
      "linkedpage": "others",
      "percentage": 0.97,
      "refdoms": 0,
      "frequency": 2
    }
  ],
  "count": 11,
  "status": 200
}

This endpoint returns the top 9 most-used link targets and an accumulated value for the rest.

HTTP Request

GET http://api_endpoint_here/pagedistribution?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
order[] -refdoms
-frequency
Sort results by given fields (in the provided order). Available values:
-linkedpage
-refdoms
-frequency
linkedpage
refdoms
frequency
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
frequency
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=frequency gt 10
write
where=frequency%20gt%2010

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].linkedpage URL of the target page
.response[].percentage Percentage of links pointing to the respective target page in relation to all links
.response[].refdoms Number of referring domains pointing to the respective target page
.response[].frequency Number of links pointing to the respective target page
.count Total number of items in the collection
.status HTTP Status Code
edugovlinks

import requests

r = requests.get('http://api_endpoint_here/edugovlinks?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/edugovlinks?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "refdoms": 2,
      "links": 2,
      "type": "edu"
    },
    {
      "refdoms": 1,
      "links": 1,
      "type": "gov"
    }
  ],
  "count": 2,
  "status": 200
}

This endpoint returns the count of .edu and .gov links and referring domains.

HTTP Request

GET http://api_endpoint_here/edugovlinks?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[].type Type of link that is counted: “edu” for .edu domains, “gov” for .gov domains
.response[].refdoms Number of domains with the respective type pointing to the given domain
.response[].links Number of links from domains with the respective type pointing to the given domain
.count Total number of items in the collection
.status HTTP Status Code

referringdomains

import requests

r = requests.get('http://api_endpoint_here/referringdomains?url=domaincrawler.com&limit=1&offset=0&order[0]=-sr', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/referringdomains?url=domaincrawler.com&limit=1&offset=0&order[0]=-sr"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "dsCount": 1,
      "ccode": "DE",
      "domain": "filezilla-project.org",
      "ip": "136.243.154.86",
      "links": 1553,
      "ip_count_additional": 0,
      "sr": 4.86
    }
  ],
  "count": 337,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns a list of referring domains.

HTTP Request

GET http://api_endpoint_here/referringdomains?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -links Sort results by given fields (in the provided order). Available values:
domain
-domain
sr
-sr
links
-links
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
ccode
ip
links
sr
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=sr gt 8
write
where=sr%20gt%208

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].ccode Country code of the server location of the referring domain
.response[].domain Referring domain name
.response[].links Number of links from referring domain
.response[].ip IP of the referring domain
.response[].ip_count_additional Number of additional IPs of referring domain
.response[].sr IV Search Rank (Page Rank)
.response[].dsCount Row index number + offset
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

iplist

import requests

r = requests.get('http://api_endpoint_here/iplist?url=domaincrawler.com&limit=1&offset=0&order[0]=links', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/iplist?url=domaincrawler.com&limit=1&offset=0&order[0]=links"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "ccode": "DE",
      "country": "Hebertsfelden",
      "city": "",
      "ip": "136.243.154.86",
      "percentage": 32.39,
      "links": 1553
    }
  ],
  "count": 340,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns a list of referring IPs.

HTTP Request

GET http://api_endpoint_here/iplist?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -links Sort results by given fields (in the provided order). Available values:
ip
-ip
ccode
-ccode
country
-country
city
-city
links
-links
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
ccode
country
city
ip
links
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=links gt 100
write
where=links%20gt%20100

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].ccode Country code of the server location
.response[].country Country name of the server location
.response[].city City of the server location
.response[].ip IP address
.response[].percentage Percentage of links pointing to the target page in relation to all links pointing to the belonging IP
.response[].links Number of links
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

tldlist

import requests

r = requests.get('http://api_endpoint_here/tldlist?url=domaincrawler.com&limit=1&offset=0&order[0]=links', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/tldlist?url=domaincrawler.com&limit=1&offset=0&order[0]=links"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "diffdomains": 13,
      "toplevel": "org",
      "percentage": 55.53,
      "links": 2662
    }
  ],
  "count": 23,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns a list of referring TLDs.

HTTP Request

GET http://api_endpoint_here/tldlist?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -links Sort results by given fields (in the provided order). Available values:
toplevel
-toplevel
links
-links
diffdomains
-diffdomains
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
links
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=links gt 100
write
where=links%20gt%20100

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].toplevel Top level domain name
.response[].diffdomains Number of domains to the belonging toplevel
.response[].percentage Percentage of links pointing to the target page in relation to all links pointing to the belonging toplevel
.response[].links Number of links
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

geolocationshort

import requests

r = requests.get('http://api_endpoint_here/geolocationshort?url=domaincrawler.com&limit=1&offset=0&order[0]=-links', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/geolocationshort?url=domaincrawler.com&limit=1&offset=0&order[0]=-links"

The above command returns JSON structured like this:

{
  "response": [
    {
      "ccode": "SE",
      "country": "Sweden",
      "percentage": 34.53,
      "links": 828
    }
  ],
  "count": 14,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns the backlink count grouped by source country.

HTTP Request

GET http://api_endpoint_here/geolocationshort?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -links Sort results by given fields (in the provided order). Available values:
ccode
-ccode
country
-country
links
-links

Response Description

Key Description
.response[].ccode Country code of the server location
.response[].country Country name of the server location
.response[].percentage Percentage of links pointing to the target page in relation to all links pointing to the belonging country code
.response[].links Number of links
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code
backlinks-histogram

import requests

r = requests.get('http://api_endpoint_here/backlinks-histogram?url=domaincrawler.com&from=20200101&to=20211201', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/backlinks-histogram?url=domaincrawler.com&from=20200101&to=20211201"

The above command returns JSON structured like this:

{
  "calendar": [
    20200110,
    20200114
  ],
  "response": [
    {
      "date": 20200110,
      "urls": 2075,
      "domains": 216
    },
    {
      "date": 20200114,
      "urls": 14,
      "domains": 5
    }
  ],
  "count": 2,
  "status": 200
}

This endpoint returns a time series of discovered referring domains and backlinks for the given period of time.

HTTP Request

GET http://api_endpoint_here/backlinks-histogram?url=domaincrawler.com&from=20200101&to=20211201

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
from
(required)
None The range start date, in the format yyyyMMdd.
to current date The end date of the range.

Response Description

Key Description
.calendar[] The dates of all known data points
.response[].date Date when the data point was written
.response[].domains The number of referring domains
.response[].urls The number of referring links
.count Total number of items in the collection
.status HTTP Status Code

backlinkshort

import requests

r = requests.post('http://api_endpoint_here/backlinkshort', json=['domaincrawler.com', 'facebook.com'], auth=('user', 'password'))
print(r.json())
# or
r = requests.get('http://api_endpoint_here/backlinkshort?urls[0]=domaincrawler.com&urls[1]=facebook.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/backlinkshort" -H 'Content-type: application/json' -d '["domaincrawler.com", "facebook.com"]'
# or
curl -u "user:password" "http://api_endpoint_here/backlinkshort?urls[0]=domaincrawler.com&urls[1]=facebook.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "backlinks": 17392272310,
      "refdoms": 1271313,
      "url": "facebook.com",
      "sr": 9.49
    },
    {
      "backlinks": 2399,
      "refdoms": 310,
      "url": "domaincrawler.com",
      "sr": 2.78
    }
  ],
  "count": 2,
  "status": 200
}

This endpoint returns the IV Search Rank (Page Rank) and the number of referring domains and backlinks for the given domain list.

HTTP Request

POST http://api_endpoint_here/backlinkshort
GET http://api_endpoint_here/backlinkshort?urls[]=domaincrawler.com

URL Parameters (only in GET request)

Parameter Default Description
urls[] None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[].url URL, host or domain from the GET/POST data
.response[].refdoms Number of referring domains
.response[].backlinks Number of referring links
.response[].sr IV Search Rank of the queried domain

newandlost-histogram

import requests

r = requests.get('http://api_endpoint_here/newandlost-histogram?url=domaincrawler.com&from=20200101&to=20211201', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/newandlost-histogram?url=domaincrawler.com&from=20200101&to=20211201"

The above command returns JSON structured like this:

{
  "calendar": [
    20200916,
    20200917
  ],
  "response": [
    {
      "date": 20200916,
      "urls": {
        "new": 37,
        "lost": 36
      },
      "domains": {
        "new": 12,
        "lost": 30
      }
    },
    {
      "date": 20200917,
      "urls": {
        "new": 37,
        "lost": 36
      },
      "domains": {
        "new": 12,
        "lost": 30
      }
    }
  ],
  "count": 2,
  "status": 200
}

This endpoint returns a time series of new and lost referring domains and backlinks for the given period of time.

HTTP Request

GET http://api_endpoint_here/newandlost-histogram?url=domaincrawler.com&from=20200101&to=20211201

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
from
(required)
None The range start date, in the format yyyyMMdd.
to current date The end date of the range.

Response Description

Key Description
.calendar[] The dates of all known data points
.response[].date Date when the data point was written
.response[].urls.new The number of new backlinks
.response[].urls.lost The number of lost backlinks
.response[].domains.new The number of new referring domains
.response[].domains.lost The number of lost referring domains
.count Total number of items in the collection
.status HTTP Status Code
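
To get the net change in referring domains and backlinks over the period, you can sum the new and lost counts from each data point. A short sketch based on the response structure above:

import requests

r = requests.get('http://api_endpoint_here/newandlost-histogram',
                 params={'url': 'domaincrawler.com', 'from': '20200101', 'to': '20211201'},
                 auth=('user', 'password'))

points = r.json()['response']
net_domains = sum(p['domains']['new'] - p['domains']['lost'] for p in points)
net_urls = sum(p['urls']['new'] - p['urls']['lost'] for p in points)
print('net referring domains:', net_domains, 'net backlinks:', net_urls)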

outgoinghosts

import requests

r = requests.get('http://api_endpoint_here/outgoinghosts?url=domaincrawler.com&limit=1&offset=0&order[0]=-extL', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/outgoinghosts?url=domaincrawler.com&limit=1&offset=0&order[0]=-extL"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "host": "www.facebook.com",
      "frequency": 38
    }
  ],
  "count": 11,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns outgoing host names and their link counts.

HTTP Request

GET http://api_endpoint_here/outgoinghosts?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -frequency Sort results by given fields (in the provided order). Available values:
frequency
-frequency
host
-host
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
frequency
host
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=frequency gt 10
write
where=frequency%20gt%2010

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].host Host name from the target domain
.response[].frequency Absolute total frequency of the host (i.e. how often it occurs)
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

outgoinglinktexts

import requests

r = requests.get('http://api_endpoint_here/outgoinglinktexts?url=domaincrawler.com&limit=1&offset=0&order[0]=-linktext', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/outgoinglinktexts?url=domaincrawler.com&limit=1&offset=0&order[0]=-linktext"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "linktext": "Facebook",
      "frequency": 37
    }
  ],
  "count": 17,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns outgoing linktexts.

HTTP Request

GET http://api_endpoint_here/outgoinglinktexts?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -frequency Sort results by given fields (in the provided order). Available values:
frequency
-frequency
linktext
-linktext
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
frequency
linktext
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=frequency gt 10
write
where=frequency%20gt%2010

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].linktext Anchor text of the link from the target domain
.response[].frequency Absolute total frequency of the anchor text (i.e. how often it occurs)
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

outgoinglinkcount

import requests

r = requests.get('http://api_endpoint_here/outgoinglinkcount?url=domaincrawler.com&order[0]=-linktype', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/outgoinglinkcount?url=domaincrawler.com&order[0]=-linktype"

The above command returns JSON structured like this:

{
  "response": [
    {
      "avg": 19,
      "pages": 35,
      "linktype": "all",
      "frequency": 681
    },
    {
      "avg": 5,
      "pages": 35,
      "linktype": "extLinks",
      "frequency": 206
    },
    {
      "avg": 13,
      "pages": 35,
      "linktype": "intLinks",
      "frequency": 475
    }
  ],
  "count": 2,
  "status": 200
}

This endpoint returns the frequency and average number of internal, external, and total outgoing links (intL/extL/sumL) and the total page count.

HTTP Request

GET http://api_endpoint_here/outgoinglinkcount?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] linktype Sort results by given fields (in the provided order). Available values:
frequency
-frequency
linktype
-linktype

Response Description

Key Description
.response[].avg Average number of internal, external or all outgoing links
.response[].pages Number of pages
.response[].linktype Type of links, e.g: intLinks, extLinks, all
.response[].frequency Total number of intLinks, extLinks, all links
.count Total number of items in the collection
.status HTTP Status Code

outgoingtoppages

import requests

r = requests.get('http://api_endpoint_here/outgoingtoppages?url=domaincrawler.com&limit=3&offset=0&order[0]=-extLinks', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/outgoingtoppages?url=domaincrawler.com&limit=3&offset=0&order[0]=-extLinks"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "webpage": "domaincrawler.com/about-us/",
      "extLinks": 7
    },
    {
      "webpage": "domaincrawler.com/",
      "extLinks": 6
    },
    {
      "webpage": "domaincrawler.com/brand-protection/",
      "extLinks": 6
    }
  ],
  "count": 38,
  "range": {
    "offset": 0,
    "limit": 3
  },
  "status": 200
}

This endpoint returns a list of pages with most external links.

HTTP Request

GET http://api_endpoint_here/outgoingtoppages?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -extLinks Sort results by given fields (in the provided order). Available values:
extLinks
-extLinks
webpage
-webpage

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].webpage URL of the outgoing link
.response[].extLinks Number of external links on the page
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code
outgoinglinks

import requests

r = requests.get('http://api_endpoint_here/outgoinglinks?url=domaincrawler.com&limit=1&offset=0&order[0]=-extL', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/outgoinglinks?url=domaincrawler.com&limit=1&offset=0&order[0]=-extL"

The above command returns JSON structured like this:

{
  "exportFinished": true,
  "response": [
    {
      "backlinkpage": "domaincrawler.com/brand-sites/",
      "title": "Brand Sites | domaincrawler",
      "linktarget": "www.linkedin.com/company/3861083",
      "linktext": "",
      "tag": "aimg",
      "nofollow": false,
      "pos": 18,
      "sr": 0,
      "cdate": 20191122
    }
  ],
  "count": 170,
  "range": {
    "offset": 0,
    "limit": 1
  },
  "status": 200
}

This endpoint returns a list of outgoing links.

HTTP Request

GET http://api_endpoint_here/outgoinglinks?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
offset 0 Start returning data after offset.
order[] -cdate Sort results by given fields (in the provided order). Available values:
cdate
-cdate
sr
-sr
where None Define filter for query. Structure is: <fieldname> <operation> <value>. Multiple conditions need to be concatenated using AND. Only the following fields are allowed:
title
backlinkpage
linktarget
linktext
nofollow
pos
sr
tag
cdate
Possible operations are: lt, le, gt, ge, eq, ne, notlike, like, in. String values need to be single-quoted. Special characters (spaces, quotes, single quotes, etc.) should be encoded, e.g.:
instead of
where=pos gt 10
write
where=pos%20gt%2010

Response Description

Key Description
.exportFinished Flag that indicates whether all data was exported into the search index
.response[].backlinkpage URL on which the outgoing link was found
.response[].title Page title of the page on which the outgoing link was found
.response[].linktarget Target URL of the link
.response[].linktext Anchor text
.response[].tag Link type (layer, ilayer, a, aimg, frame, iframe, area, 301, 302, etc.)
.response[].nofollow Flag to mark if backlink is nofollow
.response[].pos All outgoing links on the backlink page are numbered in order of appearance.
.response[].sr IV Search Rank (Page Rank)
.response[].cdate Date when the outgoing link was created in the database
.range.offset The offset of the first item in the collection to return
.range.limit The maximum number of entries to return
.count Total number of items in the collection
.status HTTP Status Code

anaurls

import requests

r = requests.get('http://api_endpoint_here/anaurls?urls[0]=domaincrawler.com&urls[1]=nic.se', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/anaurls?urls[0]=domaincrawler.com&urls[1]=nic.se"

The above command returns JSON structured like this:

{
  "response": [
    {
      "backlinks": 364,
      "refdoms": 132,
      "url": "nic.se"
    },
    {
      "backlinks": 4794,
      "refdoms": 389,
      "url": "domaincrawler.com"
    }
  ],
  "count": 2,
  "status": 200
}

This endpoint returns backlink and referring domain count for a list of URLs, domains or subdomains.

HTTP Request

GET http://api_endpoint_here/anaurls?urls[0]=domaincrawler.com&urls[1]=nic.se

URL Parameters

Parameter Default Description
urls[]
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[].url URL, host or domain from query arguments
.response[].refdoms Number of referring domains
.response[].backlinks Number of referring links
.count Total number of items in the collection
.status HTTP Status Code

sr

import requests

r = requests.get('http://api_endpoint_here/sr?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/sr?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "response": [
    {
      "sr": 2.78
    }
  ],
  "status": 200
}

This endpoint returns the IV Search Rank (Page Rank).

HTTP Request

GET http://api_endpoint_here/sr?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.

Response Description

Key Description
.response[0].sr IV Search Rank of the queried domain
.status HTTP Status Code

srspread

import requests

r = requests.get('http://api_endpoint_here/srspread?url=domaincrawler.com&showgroup=true', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/srspread?url=domaincrawler.com&showgroup=true"

The above command returns JSON structured like this:

{
  "response": [
    {
      "links": 2398,
      "sr": "0"
    },
    {
      "links": 0,
      "sr": "1-2"
    },
    {
      "links": 0,
      "sr": "3-4"
    },
    {
      "links": 0,
      "sr": "5-6"
    },
    {
      "links": 0,
      "sr": "7-8"
    },
    {
      "links": 0,
      "sr": "9-10"
    }
  ],
  "status": 200
}

This endpoint returns IV Search Rank (Page Rank) distribution of backlinks.

HTTP Request

GET http://api_endpoint_here/srspread?url=domaincrawler.com

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
showgroup false When enabled, groups results greater than zero into bins of two

Response Description

Key Description
.response[].sr IV Search Rank (Page Rank)
.response[].links Number of links
.status HTTP Status Code
exportbacklinks

import requests

r = requests.get('http://api_endpoint_here/exportbacklinks?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/exportbacklinks?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "next": "u-FqSXwgyTuGfEDg8P99YylhCRaQgT9I3mPxOg1K3onSdJXWx98tMpBrs1Vu0j24rUqXp4-skOR6aGv629W-qPiARoLotDJk",
  "response": [
    {
      "nofollow": false,
      "country": "United States",
      "extL": 72,
      "insDate": "20200810",
      "city": "",
      "ip": "151.101.2.217",
      "active": true,
      "intL": 49,
      "title": "WebPageTest Test Result - Singapore - Fire...atim.tnial.mil.id/ - 07/07",
      "linktarget": "https://domaincrawler.com/",
      "updDate": "20200811",
      "ccode": "US",
      "blpcount": 1,
      "backlinkpage": "http://www.webpagetest.org/result/170707_W1_11JR/",
      "pos": 127,
      "sumL": 121,
      "linktext": "",
      "tag": "a",
      "sr": 1.09
    }
  ],
  "count": 1,
  "status": 200
}

This endpoint is used to export backlinks without pre-indexing in the search engine.

HTTP Request

GET http://api_endpoint_here/exportbacklinks?url=domaincrawler.com
GET http://api_endpoint_here/exportbacklinks?url=domaincrawler.com&next=u-FqSXwgyTuGfEDg8P99YylhCRaQgT9Is_ZukeQu49E1afEDOTo6lX489Dwhqc4UFXuHEkGLgnwwpUlJm5sfb18kcybFJdwo

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
next None Token to retrieve next page results
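
Exports are paged with the next token rather than an offset: pass the token from each response to fetch the following page. A minimal Python sketch; the stop condition (no next field in the final response) is an assumption.

import requests

def export_backlinks(url):
    # Stream all backlinks by following the next token from page to page.
    params = {'url': url, 'limit': 1000}
    while True:
        r = requests.get('http://api_endpoint_here/exportbacklinks',
                         params=params, auth=('user', 'password'))
        data = r.json()
        for row in data['response']:
            yield row
        token = data.get('next')
        if not token:  # assumed: export is finished when no token is returned
            return
        params['next'] = token

for backlink in export_backlinks('domaincrawler.com'):
    print(backlink['backlinkpage'])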

Response Description

Key Description
.next Token to retrieve next page results
.response[].nofollow Flag to mark if the backlink is nofollow
.response[].country Country name of the backlink page server location
.response[].city City name of the backlink page server location
.response[].backlinkpage URL on which the backlink was found
.response[].title Page title of the page on which the backlink was found
.response[].linktarget Target URL of the link
.response[].linktext Anchor text
.response[].ip IP of the backlink page
.response[].tag Link type (layer, ilayer, a, aimg, frame, iframe, area, 301, 302, etc.)
.response[].intL Number of links on the backlink page that go to internal pages
.response[].extL Number of links on the backlink page that go to external pages
.response[].sumL Number of links on the backlink page
.response[].blpcount Number of backlinks from referring URL to the same domain
.response[].pos All links on the backlink page are numbered in order of appearance. This number therefore reflects the position of the link on the backlink page. “1” is the first position. The higher the number, the lower the position of the link on the page.
.response[].sr Backlink page IV Search Rank (Page Rank). IV calculates for each page the page strength from the weighted backlinks that point to this page. A page with many links is given a high IV Search Rank; a page with no or only a few links - a low IV SR
.response[].active Flag to mark if backlink is active
.response[].insDate Date when the link was created in the database
.response[].updDate Date of the update (recrawling)
.count Total number of items in the collection
.status HTTP Status Code

exportpages

import requests

r = requests.get('http://api_endpoint_here/exportpages?url=domaincrawler.com', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/exportpages?url=domaincrawler.com"

The above command returns JSON structured like this:

{
  "next": "u-FqSXwgyTuGfEDg8P99Y6lsFEdB2W6qaVL64P1uvNg",
  "response": [
    {
      "extL": [
        {
          "nofollow": false,
          "pos": 40,
          "anchor": "LinkedIn",
          "tag": "a",
          "url": "https://www.linkedin.com/company/domaincrawler/"
        }
      ],
      "intL": [
        {
          "nofollow": false,
          "pos": 18,
          "anchor": "Brand Protection",
          "tag": "a",
          "url": "https://domaincrawler.com/brand-protection/"
        }
      ],
      "info": {
        "ccode": "SE",
        "charset": "utf-8",
        "insDate": "20190822",
        "ip": "151.248.0.210",
        "description": "Domain Crawler. Empowering with data. Over 10 billion records is the reason to trust us. Learn more about Sales Intelligence & Cyber Security",
        "content-type": "text/html",
        "title": "DomainCrawler | Sales Intelligence & Cyber Security",
        "http-code": 200,
        "url": "https://domaincrawler.com/",
        "sr": 2.63
      }
    }
  ],
  "count": 1,
  "status": 200
}

This endpoint is used to export pages without pre-indexing in the search engine.

HTTP Request

GET http://api_endpoint_here/exportpages?url=domaincrawler.com
GET http://api_endpoint_here/exportpages?url=domaincrawler.com&next=u-FqSXwgyTuGfEDg8P99Y6lsFEdB2W6qaVL64P1uvNg

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
limit 50 Limit returned data. Upper limit is 10,000 rows.
next None Token to retrieve next page results

Response Description

Key Description
.next Token to retrieve next page results
.response[].info.url Page url
.response[].info.ip Page IP address
.response[].info.ccode Country code of the server location
.response[].info.content-type Content-Type
.response[].info.http-code Server http status code
.response[].info.charset Page charset
.response[].info.title Page title
.response[].info.description Page description
.response[].info.sr IV Search Rank (Page Rank)
.response[].info.insDate Date when the page was inserted in the database
.response[].intL[].nofollow Internal link nofollow flag
.response[].intL[].pos Internal link position
.response[].intL[].tag Internal link type (layer, ilayer, a, aimg, frame, iframe, area, 301, 302, etc.)
.response[].intL[].anchor Internal link anchor text
.response[].intL[].url Internal link url
.response[].extL[].nofollow External link nofollow flag
.response[].extL[].pos External link position
.response[].extL[].tag External link type (layer, ilayer, a, aimg, frame, iframe, area, 301, 302, etc.)
.response[].extL[].anchor External link anchor text
.response[].extL[].url Link url
.count Total number of items in the collection
.status HTTP Status Code

iscached

import requests

r = requests.post('http://api_endpoint_here/iscached', json={'url': 'domaincrawler.com', 'resources': ['backlinks', 'httpstatus']}, auth=('user', 'password'))
print(r.json())
# or
r = requests.get('http://api_endpoint_here/iscached?url=domaincrawler.com&resources=backlinks,httpstatus', auth=('user', 'password'))
print(r.json())
curl -u "user:password" "http://api_endpoint_here/iscached" -H 'Content-type: application/json' -d '{"url":"domaincrawler.com","resources":["backlinks","httpstatus"]}'
# or
curl -u "user:password" "http://api_endpoint_here/iscached?url=domaincrawler.com&resources=backlinks,httpstatus"

The above command returns JSON structured like this:

{
  "response": {
    "httpstatus": true,
    "backlinks": true
  }
}

This endpoint is used to check whether the data for a given list of endpoints has been exported into the API. It only works with endpoints that have an exportFinished field in their response.

HTTP Request

POST http://api_endpoint_here/iscached
GET http://api_endpoint_here/iscached?url=domaincrawler.com&resources=backlinks,httpstatus

URL Parameters

Parameter Default Description
url
(required)
None URL, host or domain. May include protocol.
Special characters in URL params (?, &, =, etc.) should be encoded.
resources
(required)
None List of endpoints.

Response Description

Key Description
.response.<endpoint_name> Flag that indicates whether the data for the respective endpoint has been exported and is available
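
A common pattern is to poll iscached until the resources you need report true and only then call the heavier endpoints. A sketch assuming the flags are booleans, as in the sample response above:

import time
import requests

def wait_until_cached(url, resources, delay=30):
    # Poll /iscached until every requested resource reports true.
    while True:
        r = requests.get('http://api_endpoint_here/iscached',
                         params={'url': url, 'resources': ','.join(resources)},
                         auth=('user', 'password'))
        flags = r.json()['response']
        if all(flags.get(name) for name in resources):
            return
        time.sleep(delay)

wait_until_cached('domaincrawler.com', ['backlinks'])
r = requests.get('http://api_endpoint_here/backlinks',
                 params={'url': 'domaincrawler.com', 'limit': 50},
                 auth=('user', 'password'))
print(r.json()['count'])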

Errors

The Backlink API uses the following error codes:

Error Code Meaning
400 Bad Request – Check your query arguments.
401 Unauthorized – Login or password is wrong.
404 Not Found – The specified endpoint could not be found.
405 Method Not Allowed – You tried to access an endpoint with an invalid method.
500 Internal Server Error – We had a problem with our server. Try again later.
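
In Python you can branch on the status code directly or let requests raise an exception for error responses. A minimal sketch:

import requests

r = requests.get('http://api_endpoint_here/backlinks',
                 params={'url': 'domaincrawler.com'}, auth=('user', 'password'))

if r.status_code == 401:
    print('Unauthorized - check your login or password')
elif r.status_code == 400:
    print('Bad Request - check your query arguments')
elif r.status_code >= 500:
    print('Server error - try again later')
else:
    r.raise_for_status()  # raises for any other 4xx response
    print(r.json())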