
Alli AI Web Crawler Information

What is the Alli AI Crawler?

The Alli AI web crawler is a key component of our SEO automation platform. It systematically scans and analyzes websites to collect data, enabling intelligent, automated optimizations that improve search engine rankings.


Why Does Alli AI Crawl Websites?

Unlike general search engine crawlers, which index sites across the open web, the Alli AI crawler only accesses sites that users have explicitly added to our platform. By adding a site, users authorize Alli AI to:

  • Automate On-Page SEO: Our crawler collects website data to apply bulk optimizations, such as refining meta tags, headers, and keyword strategies.
  • Enhance Schema Markup: Understanding site structure allows the crawler to generate schema markup, making content more search-engine friendly.
  • Improve Internal Linking: The crawler maps website architecture to suggest or automate internal linking strategies, optimizing navigation and user experience.

 

Alli AI User-Agent Details

User-Agent Strings

When accessing websites, the Alli AI crawler identifies itself with the following user-agent strings:

Desktop:
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.6261.0 Safari/537.36 AlliAI/1.0 (+https://www.alliai.com/crawler)

Mobile:
Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.4 Mobile/15E148 Safari/604.1 AlliAI/1.0 (+https://www.alliai.com/crawler)


Note: Site owners only need to check for the “AlliAI” part of the user-agent string, as the rest may change over time.
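As a minimal illustration of that check, the sketch below is a hypothetical server-side helper (not part of the Alli AI platform) that matches Alli AI requests by that token, assuming the raw User-Agent header is available as a string:

def is_alli_ai(user_agent: str) -> bool:
    # The "AlliAI" token is the stable part of the user-agent string;
    # the surrounding browser details may change over time.
    return "AlliAI" in (user_agent or "")

# Example with the desktop user-agent string shown above:
ua = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/122.0.6261.0 Safari/537.36 AlliAI/1.0 (+https://www.alliai.com/crawler)")
print(is_alli_ai(ua))  # True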


Alli AI Bot

In addition to our crawler, we operate a bot that identifies itself with the following user-agent string:

AlliAI/1.0 (+https://www.alliai.com/crawler)

This bot is primarily used to fetch individual files, such as a website’s favicon, and to retrieve assets like scripts, stylesheets, and images for optimization purposes.


How Alli AI Crawls Websites

Respect for Robots.txt

The Alli AI crawler adheres to the rules specified in a site’s robots.txt file, so site owners can manage its access by defining directives there.
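For example, to let the crawler reach most of a site while keeping it out of one directory, a robots.txt file could include the following (the /staging/ path is purely illustrative):

User-agent: AlliAI
Disallow: /staging/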

Crawl Frequency

Crawl frequency varies with user settings and website size and is tuned to minimize impact on server resources.

Identification

All Alli AI crawler requests originate from designated IP ranges. To verify Alli AI’s crawler activity, please contact our support team.
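As a quick starting point, you can surface candidate requests for review by filtering your server access log for the “AlliAI” token; the sketch below is only an aid (the log path is hypothetical, and a user-agent match alone is not proof of origin, so confirm the source IPs with our support team):

from pathlib import Path

# Hypothetical log location; adjust for your server and log format.
log_file = Path("/var/log/nginx/access.log")

for line in log_file.read_text(errors="ignore").splitlines():
    # Surface requests whose user-agent claims to be Alli AI so the
    # originating IPs can be reviewed and confirmed with support.
    if "AlliAI" in line:
        print(line)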


Managing Alli AI’s Access

If you wish to adjust or prevent Alli AI’s crawling of your website, you can do so by modifying your robots.txt file. To block the Alli AI crawler entirely, add:
User-agent: AlliAI
Disallow: /



Filtering Alli AI Crawler Traffic

To exclude Alli AI crawler traffic from your Google Analytics 4 (GA4) reports, refer to our guide: How to Exclude Alli AI’s Crawler Traffic from GA4 Reports.


Contact Information

For questions about the Alli AI crawler, user-agent verification, or managing access, reach out to our support team:
Email: support@alliai.com
Website: https://www.alliai.com
Help Center: Visit our Help Center for additional support and resources.