Platform Settings

How do the Allowlist and Denylist work?

When you configure the Denylist on a defined object, such as Anonymous Proxies, Bot Defender sets the score of that session to 100 (the highest available risk score).

When you configure the Allowlist on a defined object, such as Google crawlers, Bot Defender sets the score of that session to 0 (the lowest available risk score).
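
As an illustration only, the effect of a list match on the session score can be sketched like this (a minimal sketch with hypothetical names, not Bot Defender's actual API):

    # Illustrative sketch: how a list match maps to a session's risk score.
    DENYLIST_SCORE = 100   # highest available risk score
    ALLOWLIST_SCORE = 0    # lowest available risk score

    def score_for_list_match(list_type: str) -> int:
        # Hypothetical helper name; only the 0/100 mapping is from the article.
        return DENYLIST_SCORE if list_type == "denylist" else ALLOWLIST_SCORE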

In what order are the Allowlist and Denylist processed?

The order of operations is (a sketch follows this list):

  1. User-defined rules are checked first. They are processed in order, and you can drag them into a different order.
  2. Defined rules are checked next.
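
A minimal sketch of that ordering, assuming the first matching rule decides the session's score (the names here are hypothetical, not Bot Defender's actual API):

    # Illustrative sketch of rule-processing order.
    def evaluate(session, user_rules, defined_rules):
        # User-defined rules run first, in their configured order;
        # defined rules are consulted only afterwards.
        for rule in list(user_rules) + list(defined_rules):
            if rule.matches(session):
                return rule.score  # 0 for an Allowlist match, 100 for a Denylist match
        return None  # no rule matched; normal detection scoring applies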

What happens if I integrate the risk cookie with my infrastructure and Bot Defender stops sending it for any reason?

Bot Defender maintains a second set of failover servers that always return a risk cookie with a score of 0, ensuring there is no time when your application is without the risk cookie.
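
Because the failover cookie always carries a score of 0, enforcement logic that blocks only on high scores degrades safely. A minimal sketch, with hypothetical names and an illustrative threshold:

    # Illustrative enforcement sketch; names and threshold are assumptions.
    BLOCK_THRESHOLD = 100  # Denylist matches are scored 100

    def should_block(risk_score: int) -> bool:
        # The failover cookie always carries score 0, so traffic is never
        # blocked merely because scoring is running in failover mode.
        return risk_score >= BLOCK_THRESHOLD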

What is the best practice when creating a new application?

When creating a new application, it is best practice to create a separate policy for it, configure the risk cookie in that policy, and configure the filter rules before deploying the snippet.

Should I assign the same policy to more than one application?

You can assign the same policy to more than one application, but you should be aware of the following limitations:

If you change the risk cookie secret key in the policy, you will need to update all applications that use that key for enforcement.

Only set the cookie on the parent domain if you want the same risk cookie used on all the subdomains of your website.
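
For example, a risk cookie scoped to the parent domain is sent to every subdomain as well (an illustrative header; the cookie name and domain are placeholders):

    Set-Cookie: risk_cookie=<value>; Domain=.example.com; Path=/; Secure

A cookie set without a Domain attribute is host-only and is not shared with subdomains.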

Is it possible to have a staging application for development purposes?

Yes. Anyone with Admin role credentials can create a staging application in the HUMAN Portal.

Creating a staging application is really just creating an application whose sole purpose is staging. It can be deployed on your staging servers, giving you testing capabilities. The staging application can share a policy with your production application, letting you test the production configuration in staging. You can also use a dedicated staging policy that allows development and testing tools you would not permit in production. Which approach to take depends on your architecture and needs.

To create a new policy, see Managing Policies.

To create a new application:

  1. In the Portal, go to Platform Settings > Applications > Create Application.
  2. Enter the Application Name and select the Policy to apply from the account’s existing policies. Only one policy can be selected per application.

For more details, see Managing Applications.

What IPs should I allow in my outgoing firewall rules when using the Enforcer?

An up-to-date list of all IPs used by our API is available in our API IPs documentation.

How can I search for a range of IPs? Using a * does not work.

You can search for a range of IPs or a partial IP using CIDR notation: the IP address followed by a slash character (/) and a decimal number indicating how many leading bits of the address must match.
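
For example:

    10.1.2.0/24   matches 10.1.2.0 through 10.1.2.255
    10.1.0.0/16   matches 10.1.0.0 through 10.1.255.255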

More information on how to search using CIDR notation is available here.

Is there CAPTCHA coverage in China?

Yes, there is full global coverage for the reCAPTCHA domain.

Why is there a discrepancy between the number of page views I see in my analytics tool and the numbers I see in the Portal? What data is presented in the new Portal Dashboard and Investigation tabs?

The Bot Defender Console is request-based.

Requests measure how much traffic a server handles, including traffic from bots that do not run JavaScript. Calls to a server can include page downloads, comments on a thread, “likes”, graphic or image views, or any other action on a website. A single pageview can therefore generate many requests (for example, one HTML document plus dozens of image and API calls), so the number of requests is not identical to the number of pageviews, although the two correlate. The request metric cannot be used by JavaScript-based services such as common analytics and marketing tools, since they do not have access to the server traffic.

For a full explanation, refer to the Data Type section of the Portal Documentation.

Do all Known Bots & Crawlers come from a curated list?

The known bots list is updated automatically and on an ongoing basis, and entries are verified with identifiers such as User Agent, IP, ASN organization, and more. We also track the behavior of bots on the list, keeping the risk of malicious bots abusing it very low.

How do I know which bots in the known bots list are identified by User Agent only, User Agent + IP, etc.?

Currently there is no way to see this in the console. Please contact us if you have any questions and we'll provide the full details.

I am having trouble parsing my User Agent list. I have separated the User Agents with commas (,) and the rule is mis-parsing. How do I separate the different User Agents?

The best way to separate User Agents is with the pipe character (|). Comma separators can be problematic because many User Agents contain commas natively.
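
For example, the following entry lists three User Agents unambiguously, even though the first one contains commas (the values are illustrative):

    Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; .NET CLR 2.0.50727, .NET CLR 3.5.30729)|curl/7.88.1|MyBot/1.0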

What user role should I give someone who needs to review detected bots and decide what action to take on them (Denylist/Allowlist)?

The best option is to give them the Security Admin role, which grants permission to modify the Access Control section of the policy, with read-only access to the rest of the console.
Alternatively, you can give them read-only permissions; if anything needs to be modified, they can contact our team on Slack and we can make the change.

