fix: Skip discovery when user provides direct discovery file URLs

When a user directly provides a URL to a discovery file (sitemap.xml, llms.txt, robots.txt, etc.),
the system now skips the discovery phase and uses the provided file directly.

This prevents unnecessary discovery attempts and respects the user's explicit choice.

Changes:
- Check if the URL is already a discovery target before running discovery
- Skip discovery for: sitemap files, llms variants, robots.txt, well-known files, and any .txt files
- Add logging to indicate when discovery is skipped
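The diff below calls several `url_handler` predicates to classify the URL. As a rough sketch of what such checks might look like (only the function names come from the diff; these implementations are assumptions, not the project's actual code):

```python
from urllib.parse import urlparse

# Hypothetical stand-ins for the URLHandler predicates referenced in the diff.
def is_sitemap(url: str) -> bool:
    path = urlparse(url).path.lower()
    return path.endswith(".xml") and "sitemap" in path

def is_llms_variant(url: str) -> bool:
    name = urlparse(url).path.rsplit("/", 1)[-1].lower()
    return name in ("llms.txt", "llms-full.txt")

def is_robots_txt(url: str) -> bool:
    return urlparse(url).path.lower() == "/robots.txt"

def is_well_known_file(url: str) -> bool:
    return urlparse(url).path.lower().startswith("/.well-known/")

def is_txt(url: str) -> bool:
    return urlparse(url).path.lower().endswith(".txt")
```

Each predicate inspects only the URL path, so no network request is needed to decide whether to skip discovery.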

Example: When crawling 'xyz.com/sitemap.xml' directly, the system will now use that sitemap
instead of trying to discover a different file such as llms.txt.
Author: leex279
Date: 2025-09-20 13:34:07 +02:00
Parent: 7f74aea476
Commit: c1677a9220

@@ -339,7 +339,19 @@ class CrawlingService:
         # Discovery phase - find the single best related file
         discovered_urls = []
-        if request.get("auto_discovery", True):  # Default enabled
+        # Skip discovery if the URL itself is already a discovery target (sitemap, llms file, etc.)
+        is_already_discovery_target = (
+            self.url_handler.is_sitemap(url) or
+            self.url_handler.is_llms_variant(url) or
+            self.url_handler.is_robots_txt(url) or
+            self.url_handler.is_well_known_file(url) or
+            self.url_handler.is_txt(url)  # Also skip for any .txt file that user provides directly
+        )
+        if is_already_discovery_target:
+            safe_logfire_info(f"Skipping discovery - URL is already a discovery target file: {url}")
+        if request.get("auto_discovery", True) and not is_already_discovery_target:  # Default enabled, but skip if already a discovery file
             await update_mapped_progress(
                 "discovery", 25, f"Discovering best related file for {url}", current_url=url
             )