# Web Scraper

Extract structured data from any website by providing a URL and a JSON schema. The system automatically selects the best scraping strategy, handles JavaScript-heavy sites, and uses AI to structure the extracted content according to your schema.

## Key Features

- Automatic strategy selection (static, browser, or hybrid)
- AI-powered data extraction
- JavaScript and SPA support
- Batch processing (up to 50 URLs)
- Pagination support
- Depth scraping (follow links)
- Smart field mapping
- Real-time progress tracking

## Authentication

All endpoints require API key authentication via the `X-API-Key` header:

```bash
X-API-Key: YOUR_ACCESS_TOKEN
```

[Get your API key from Freddy Hub](https://freddy-hub.aitronos.com/freddy/api)

## How It Works

1. **Site Analysis**: Automatically detects site complexity and determines the optimal extraction strategy
2. **Content Extraction**: Uses the appropriate engine (static, browser, or hybrid) to extract content
3. **Data Structuring**: Uses AI to structure the extracted content according to your schema
4. **Validation**: Validates the extracted data against your schema

## Supported Features

- **Pagination**: Automatically follows pagination links to scrape multiple pages
- **Depth Scraping**: Follows links from listing pages to detail pages
- **Rate Limiting**: Intelligent rate limit avoidance with exponential backoff
- **Progress Tracking**: Real-time progress updates via polling or Server-Sent Events
- **Batch Processing**: Processes multiple URLs in parallel

## Use Cases

- E-commerce product data extraction
- Real estate listings scraping
- Job board data collection
- News article extraction
- Research data gathering
- Price monitoring
- Content aggregation

## Rate Limits

- Single URL Scraping: 100 requests per minute per organization
- Batch Scraping: 10 batches per minute per organization
- Site Analysis: 200 requests per minute per organization
- Job Status Checks: 1,000 requests per minute per organization

## Next Steps

- [Scrape Single URL](/docs/api-reference/scraper/scrape-single) - Extract data from a single URL
- [Batch Scrape URLs](/docs/api-reference/scraper/scrape-batch) - Process multiple URLs in parallel
- [Analyze Site](/docs/api-reference/scraper/analyze-site) - Analyze a website before scraping
- [Get Job Status](/docs/api-reference/scraper/get-job-status) - Check the status of a scraping job
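A single-URL scrape pairs the target URL with a JSON schema describing the fields you want back, authenticated with the `X-API-Key` header shown above. The sketch below builds such a request; the base URL, endpoint path, and payload field names are assumptions for illustration — the linked endpoint reference is authoritative.

```python
# Minimal sketch of a single-URL scrape request. The base URL,
# "/scrape" path, and payload field names are hypothetical.
import json
from urllib.request import Request

BASE_URL = "https://api.example.com"  # placeholder, not the real base URL

def build_scrape_request(url: str, schema: dict, api_key: str) -> Request:
    """Build an authenticated POST request for one URL plus a JSON schema."""
    payload = {
        "url": url,
        "schema": schema,  # describes the structured fields to extract
    }
    return Request(
        f"{BASE_URL}/scrape",  # hypothetical endpoint path
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "X-API-Key": api_key,  # auth header from the docs above
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: extract product name and price from a product page.
req = build_scrape_request(
    "https://shop.example.com/item/42",
    {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "price": {"type": "number"},
        },
    },
    "YOUR_ACCESS_TOKEN",
)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) is left out so the sketch stays network-free.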
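Batch scraping caps each request at 50 URLs, so larger URL lists must be split into batches client-side. A minimal sketch (the 50-URL cap comes from the feature list above; the helper itself is illustrative, not part of the API):

```python
# Split a URL list into batches that respect the 50-URLs-per-batch cap.
def chunk_urls(urls: list[str], batch_size: int = 50) -> list[list[str]]:
    if batch_size < 1:
        raise ValueError("batch_size must be at least 1")
    return [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]

# 120 URLs split into batches of 50, 50, and 20.
batches = chunk_urls([f"https://example.com/page/{n}" for n in range(120)])
```

Each resulting batch would then be submitted as one batch-scrape request, keeping in mind the 10-batches-per-minute rate limit.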
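Job status can be polled while a scrape runs, and the rate-limiting note above suggests backing off exponentially between checks. A client-side poll loop might look like the sketch below; the `get_status` callback and the terminal state names are assumptions, not documented API values.

```python
import time

def poll_job(get_status, max_attempts: int = 8,
             base_delay: float = 0.5, sleep=time.sleep):
    """Poll until the job reaches a terminal state, doubling the delay
    between attempts (exponential backoff). `get_status` is any callable
    returning a status string; the terminal states here are illustrative."""
    delay = base_delay
    for _ in range(max_attempts):
        status = get_status()
        if status in ("completed", "failed"):
            return status
        sleep(delay)
        delay = min(delay * 2, 30.0)  # cap the backoff delay
    raise TimeoutError("job did not finish within the polling budget")

# Example with a fake status source that completes on the third check;
# sleeping is stubbed out so the example runs instantly.
statuses = iter(["queued", "running", "completed"])
result = poll_job(lambda: next(statuses), sleep=lambda _: None)
# result == "completed"
```

Keeping the poll interval growing rather than fixed also stays well inside the 1,000-requests-per-minute limit on status checks.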