# scraper-kit
An adapter-based web scraping framework with built-in anti-bot evasion, designed for sites that fight back: CAPTCHAs, fingerprinting, and rate limits.
Write a site adapter and get human-like browsing, passive API interception, health monitoring, and failure diagnostics for free.
## Installation
```
pip install git+https://github.com/logpie/scraper-kit.git
```
Then install the browser binaries:
```
python -m patchright install chromium
```
## Quick Start
```
from patchright.sync_api import sync_playwright
from scraper_kit.browser import find_system_chrome, launch_cdp_browser, setup_cdp_stealth
from scraper_kit.engine import fetch_posts
from my_site import MySiteAdapter
adapter = MySiteAdapter()
with sync_playwright() as p:
chrome_path = find_system_chrome()
browser, proc = launch_cdp_browser(p, chrome_path, headed=False)
context = browser.contexts[0]
page = context.pages[0]
setup_cdp_stealth(page, context, browser.version)
posts = fetch_posts(
adapter, "search keyword",
page=page, context=context,
max_posts=10,
max_comments=5,
)
for post in posts:
print(f"{post['user']}: {post['title'][:60]}")
print(f" likes={post['likes']} comments={post['comments']}")
browser.close()
```
## How It Works
The framework runs a loop: **search → extract cards → open each card → capture data → close → next card**. For each card it first tries to get the data from the site's own API calls (passive interception), then falls back to DOM scraping.
```
adapter.search(page, keyword)
adapter.apply_filters(page, sort, time_window)
│
│ ┌─── Page loop (up to max_pages) ─────────────────────────┐
│ │ │
│ │ adapter.extract_cards(page) → list of card dicts │
│ │ │ │
│ │ │ ┌─── Card loop (for each card) ──────────────────┐ │
│ │ │ │ │ │
│ │ │ │ adapter.open_detail(page, card) │ │
│ │ │ │ adapter.has_captcha(page) │ │
│ │ │ │ adapter.wait_for_detail(page, card, timeout) │ │
│ │ │ │ │ │
│ │ │ │ ┌─ Data merge (API wins, DOM fills gaps) ──┐ │ │
│ │ │ │ │ PassiveTap buffer (from get_api_routes) │ │ │
│ │ │ │ │ adapter.extract_detail(page, card) │ │ │
│ │ │ │ │ adapter.extract_comments(page, max) │ │ │
│ │ │ │ └──────────────────────────────────────────┘ │ │
│ │ │ │ │ │
│ │ │ │ adapter.take_screenshot(page, card, dir) │ │
│ │ │ │ adapter.close_detail(page, card) │ │
│ │ │ └─────────────────────────────────────────────────┘ │
│ │ │ │
│ │ scroll down, repeat │
│ └───────────────────────────────────────────────────────────┘
```
Your adapter controls all site-specific behavior. The engine handles timing, health monitoring, deduplication, and retry logic.
### Passive API Interception
Most sites load data through internal API calls when you click into a post. PassiveTap listens via `page.on("response")` and captures those calls automatically: zero extra network requests, invisible to rate limiters.
You tell the framework which URLs to listen for and how to parse them:
```
def get_api_routes(self):
return {
"/api/v1/post/detail": self._parse_feed, # URL substring to match
"/api/v1/comment/list": self._parse_comments,
}
def _parse_feed(self, body):
post = body["data"]
return "feed", { # must return ("feed", post_dict)
"note_id": post["id"],
"title": post["title"],
"content": post["text"],
"likes": str(post["like_count"]),
"comments": str(post["comment_count"]),
"date": post["created_at"],
}
def _parse_comments(self, body):
comments = [{"user": c["user"]["name"], "text": c["text"]}
for c in body["data"]["comments"]]
return "comments", comments # must return ("comments", list)
def extract_note_id_from_api(self, data_type, data):
if data_type == "feed":
return data.get("note_id", "")
return "" # framework falls back to URL parsing
```
When the engine opens a card and the site fires its own API calls, PassiveTap captures the responses, parses them through your handlers, and merges the result with any DOM data. For structured fields, API data wins; the DOM fills the gaps.
If you skip passive interception (`get_api_routes` returns `{}`), the framework still works: it just relies entirely on `extract_detail()` and `extract_comments()` for DOM scraping.
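The merge itself is a precedence rule per field. A minimal sketch of the idea (an illustration, not the engine's actual code; the keys come from the post schema below):
```
from scraper_kit.adapter import REQUIRED_POST_KEYS

def merge_post(card: dict, api: dict, dom: dict) -> dict:
    """API wins, DOM fills gaps, card data is the last resort."""
    merged = {}
    for key in REQUIRED_POST_KEYS:
        for source in (api, dom, card):   # precedence order
            value = source.get(key)
            if value:                     # empty values never overwrite data
                merged[key] = value
                break
        else:
            merged[key] = ""              # no source had this field
    return merged
```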
### Anti-Bot Evasion
Built in, no configuration required:
- **Stealth shims**: patch fingerprint leaks (`outerWidth/outerHeight`, `navigator.permissions`, `navigator.connection`, WebGL parameters, Error stack traces)
- **Human-like behavior**: Bezier-curve mouse movement with jitter, log-normal sleep distributions, inertial trackpad scrolling (see the sketch below)
- **Adaptive health monitoring**: backs off automatically when the site pushes back, stops when the session dies
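The human-behavior piece lives in `human/behavior.py`; the following is not that code, just a sketch of the two core ideas so you know what the engine does on your behalf:
```
import math
import random

def human_sleep(page, median_ms=800, sigma=0.6):
    # Log-normal pauses: most cluster near the median, with a long tail
    # of occasional slow "reading" pauses, unlike uniform jitter.
    page.wait_for_timeout(random.lognormvariate(math.log(median_ms), sigma))

def human_mouse_move(page, x0, y0, x1, y1, steps=25):
    # Quadratic Bezier through a random control point, plus per-step
    # jitter, instead of a straight machine-perfect line.
    cx = (x0 + x1) / 2 + random.uniform(-80, 80)
    cy = (y0 + y1) / 2 + random.uniform(-80, 80)
    for i in range(1, steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        page.mouse.move(x + random.uniform(-1, 1), y + random.uniform(-1, 1))
```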
### Failure Bundles
Diagnostic snapshots when a detail fetch fails:
```
posts = fetch_posts(
adapter, keyword,
page=page, context=context,
failure_bundle_verbosity="standard", # off | minimal | standard | full
)
```
A bundle includes the page URL/title/text, tap buffer state, timing, and health score. Saved to `data/logs/failures/{site}/{keyword}/`. Defaults to `"off"`.
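To pull up the most recent bundle after a failed run, something like this works (only the directory layout above is assumed; the file format inside a bundle isn't documented here):
```
from pathlib import Path

def latest_failure_bundle(site: str, keyword: str) -> Path | None:
    root = Path("data/logs/failures") / site / keyword
    bundles = sorted(root.glob("*"), key=lambda p: p.stat().st_mtime)
    return bundles[-1] if bundles else None
```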
## Writing an Adapter
### Methodology
You don't write an adapter by reading a protocol definition and filling in placeholders. You write it by exploring the site in Chrome DevTools and translating what you see into adapter methods. The workflow:
**Step 1: Recon the site in DevTools** (30 minutes)
Open the site in Chrome. Search for a keyword. Open DevTools → Network tab (filter by XHR/Fetch). Click into posts and watch which API calls fire.
You're looking for:
- **Search results**: which endpoint returns the post list? What fields does it have?
- **Post detail**: when you click a post, is there an API call with the full content, or is it all server-rendered HTML?
- **Comments**: a separate API call, or embedded in the page?
Note the URL patterns and study the response JSON structure.
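You can do the same recon from a script instead of the Network tab. This throwaway logger uses plain Playwright APIs (nothing framework-specific) and prints every XHR/fetch response while you click around in a headed window:
```
def log_api_traffic(page):
    # Print every XHR/fetch response the site fires while you browse.
    def on_response(response):
        if response.request.resource_type in ("xhr", "fetch"):
            print(response.status, response.url)
    page.on("response", on_response)
```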
**Step 2: Figure out the UI model**
Two common patterns:
- **Modal** (e.g., XHS): clicking a search result opens an overlay; the search results stay in the DOM underneath. Close it by pressing Escape or clicking outside.
- **Navigation** (e.g., Douyin): clicking a result navigates to a new page. Go back with `page.go_back()`.
This determines how you implement `open_detail()` and `close_detail()`; the sketch below shows the shape of each.
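A minimal sketch of what each pattern means for `close_detail()` (the `.result-list` selector is a placeholder):
```
def close_detail_modal(page, card):
    # Modal site: the overlay sits on top of the results.
    page.keyboard.press("Escape")
    page.wait_for_selector(".result-list", timeout=5000)

def close_detail_navigation(page, card):
    # Navigation site: the detail page replaced the results entirely.
    page.go_back()
    page.wait_for_selector(".result-list", timeout=5000)
```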
**Step 3: Find stable selectors**
In DevTools → Elements, inspect the search results and the post detail. Look for:
- `data-*` attributes (e.g., `data-e2e="search-card"`): survive redesigns
- Semantic class names (e.g., `.note-item`, `.comment-list`): moderately stable
- Obfuscated class names (e.g., `.css-1a2b3c`): **avoid these**, they change on every deploy
If the site obfuscates everything, `data-*` attributes or structural selectors (`a[href*="/video/"]`) are your only options.
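Before committing to a selector, sanity-check how many nodes it matches; a stable card selector should match exactly one element per visible result. For example (hypothetical selector):
```
count = page.evaluate(
    "sel => document.querySelectorAll(sel).length",
    'a[href*="/video/"]',
)
print(f"selector matches {count} elements")
```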
**Step 4: Build the adapter incrementally**
Don't implement all 18 methods at once. Start with the 4 that matter most:
```
class MySiteAdapter:
name = "mysite"
base_url = "https://www.example.com"
def search(self, page, keyword): ...
def extract_cards(self, page): ...
def open_detail(self, page, card): ...
def extract_detail(self, page, card): ...
```
Test each one interactively before moving on (`headed=True`, stepping through with print statements), then add the rest.
### Full Adapter Example
Here's a realistic adapter, with comments explaining each decision. Real adapters use `page.evaluate()` with JS blocks for extraction (faster, fewer round trips) rather than chains of `page.query_selector()` calls.
```
from scraper_kit.adapter import SiteAdapter
class MySiteAdapter:
name = "mysite"
base_url = "https://www.example.com"
# ── Search ───────────────────────────────────────────────
def search(self, page, keyword):
"""Navigate to search results. Return the search URL."""
page.goto(f"{self.base_url}/search?q={keyword}",
wait_until="domcontentloaded", timeout=30000)
# Wait for results to render — pick a selector that proves
# the results are loaded, not just the page shell.
page.wait_for_selector(".result-list", timeout=15000)
return page.url
def apply_filters(self, page, sort, time_window):
"""Click filter buttons in the UI.
sort values: "general", "latest", "most-liked", "most-commented"
time_window values: "day", "week", "half-year", "any"
Map these to your site's filter labels.
"""
if sort == "general" and time_window == "any":
return # defaults, nothing to click
labels = {"latest": "最新", "most-liked": "最多点赞"}
label = labels.get(sort)
if label:
page.evaluate(f"""() => {{
const btn = [...document.querySelectorAll('button, span')]
.find(el => el.textContent.trim() === '{label}');
if (btn) btn.click();
}}""")
# ── Card extraction ──────────────────────────────────────
def extract_cards(self, page):
"""Extract card dicts from search results.
Must return at least 'note_id' per card. Include anything
cheap to grab (title, author, likes) — the engine uses these
for dedup and seen-set comparison.
"""
return page.evaluate("""() => {
const cards = [];
const seen = new Set();
document.querySelectorAll('.result-card').forEach(el => {
const id = el.dataset.postId;
if (!id || seen.has(id)) return;
seen.add(id);
cards.push({
note_id: id,
url: el.querySelector('a')?.href || '',
title: el.querySelector('h3')?.textContent?.trim() || '',
user: el.querySelector('.author')?.textContent?.trim() || '',
likes_from_card: el.querySelector('.likes')?.textContent?.trim() || '0',
});
});
return cards;
}""")
# ── Detail view ──────────────────────────────────────────
def open_detail(self, page, card):
"""Open the post. Return True if the click worked.
For modal sites: click the card element.
For navigation sites: click the link.
"""
el = page.query_selector(f'[data-post-id="{card["note_id"]}"]')
if not el:
return False
el.click()
return True
def wait_for_detail(self, page, card, timeout=8000):
"""Wait for the post content to appear.
Pick a selector that means "content is loaded", not just
"the modal/page shell appeared". If the content loads via
a separate API call, wait for the text container.
"""
try:
page.wait_for_selector(".post-body", timeout=timeout)
return True
except Exception:
return False
def extract_detail(self, page, card):
"""Scrape post data from the DOM.
The framework merges this with passive API data (if any).
Return as many REQUIRED_POST_KEYS as you can. Missing keys
fall back to card data.
"""
return page.evaluate("""() => {
const content = document.querySelector('.post-body')?.innerText?.trim() || '';
const likes = document.querySelector('.detail .like-count')?.textContent?.trim() || '';
const comments = document.querySelector('.detail .comment-count')?.textContent?.trim() || '';
const date = document.querySelector('.post-date')?.textContent?.trim() || '';
const video = document.querySelector('video[src]');
const video_url = (video && !video.src.startsWith('blob:')) ? video.src : '';
return { content, likes, comments, date, video_url };
}""")
def extract_comments(self, page, max_comments=10):
"""Scrape comments from the DOM.
Scroll the comment container to load more. Use stale-round
detection to stop when no new comments appear.
"""
# Scroll to load comments
for _ in range(max_comments):
page.evaluate("() => document.querySelector('.comments')?.scrollBy(0, 500)")
page.wait_for_timeout(500)
return page.evaluate("""(max) => {
return [...document.querySelectorAll('.comment-item')].slice(0, max).map(el => ({
user: el.querySelector('.author')?.textContent?.trim() || '',
text: el.querySelector('.text')?.textContent?.trim() || '',
likes: el.querySelector('.likes')?.textContent?.trim() || '0',
}));
}""", max_comments)
def close_detail(self, page, card):
"""Dismiss the detail view and get back to search results.
CRITICAL CONTRACT: after this returns, extract_cards() must
work again. If it doesn't, the page loop stalls.
Modal sites: press Escape or click the close button.
Navigation sites: page.go_back() + wait for search results.
"""
page.keyboard.press("Escape")
page.wait_for_timeout(500)
# Verify search results are visible again
page.wait_for_selector(".result-list", timeout=5000)
def take_screenshot(self, page, card, screenshot_dir):
"""Optional. Return file path or empty string."""
return ""
# ── Passive API interception ─────────────────────────────
# Skip if the site doesn't have useful API calls, or implement
# later once basic DOM scraping works.
def get_api_routes(self):
return {}
def extract_note_id_from_api(self, data_type, data):
return ""
# ── Anti-bot ─────────────────────────────────────────────
def has_captcha(self, page):
"""Check for CAPTCHA/bot-detection overlays.
IMPORTANT: Match on structural elements, not text content.
Generic text like "验证" appears in normal UI (e.g., "已验证"
in user profiles). False positives cause the engine to bail
on healthy sessions.
"""
return page.evaluate("""() => {
const wall = document.querySelector('.captcha-overlay, .captcha-modal');
return !!(wall && wall.offsetHeight > 0);
}""")
def dismiss_captcha(self, page):
return False # Manual intervention needed
def has_auth_evidence(self, page):
"""Check if the session expired (login wall visible)."""
return bool(page.query_selector(".login-modal, .auth-required"))
def ensure_loaded(self, page):
"""Called once after browser opens. Navigate to the site and
verify it's ready — e.g., wait for a JS function to exist,
or for the main content area to render."""
page.goto(self.base_url, wait_until="domcontentloaded")
# ── Content parsing ──────────────────────────────────────
def parse_date_age_days(self, date_str):
"""Parse site-specific date strings to age in days.
Used by the filtering layer to apply time-window filters.
Return None if unparseable — the post will be included
regardless of time window.
"""
return None
def parse_engagement(self, value):
"""Parse engagement strings like "1.2k", "3.5万", "1,234".
Used by the filtering layer for trending detection (comparing
current engagement against a previous snapshot).
"""
if not value:
return 0
s = str(value).strip().lower()
if s.endswith("k"):
try: return int(float(s[:-1]) * 1000)
except: return 0
if s.endswith("万"):
try: return int(float(s[:-1]) * 10000)
except: return 0
try:
return int(s.replace(",", ""))
except (ValueError, AttributeError):
return 0
def build_post_url(self, note_id):
"""Canonical URL for a post. Used as fallback if card/detail
didn't provide one."""
return f"{self.base_url}/post/{note_id}"
# ── Browser config ───────────────────────────────────────
def get_cdp_args(self):
"""Extra Chrome flags. Usually empty."""
return []
def get_locale(self):
"""Browser locale. Match the site's language."""
return "en-US"
def get_session_cookie_name(self):
"""The cookie that proves the session is alive.
Used by the framework to detect expiring sessions."""
return "session_id"
```
### Post Schema
`extract_detail()` must return (or contribute to, after the card + API data merge) these keys:
```
REQUIRED_POST_KEYS = {
"note_id", # unique identifier — the card already provides this
"url", # canonical URL — card or build_post_url() fallback
"title", # post title (can be "")
"content", # body text
"user", # author name
"likes", # string — e.g., "1.2万", "350"
"comments", # string
"date", # display date — e.g., "2天前", "2025-01-15"
}
```
Optional: `collects`, `shares`, `tags`, `cover_url`, `video_url`, `image_urls`, `top_comments`, `screenshot`.
You don't need to return every required key from every method. The engine merges data from three sources (card → API → DOM), and each source fills in what it can.
### Incremental Testing
Test with a headed browser so you can see what's happening:
```
from patchright.sync_api import sync_playwright
from scraper_kit.browser import find_system_chrome, launch_cdp_browser, setup_cdp_stealth
adapter = MySiteAdapter()
with sync_playwright() as p:
browser, proc = launch_cdp_browser(p, find_system_chrome(), headed=True)
context = browser.contexts[0]
page = context.pages[0]
setup_cdp_stealth(page, context, browser.version)
# Test search
adapter.ensure_loaded(page)
url = adapter.search(page, "test keyword")
print(f"Search URL: {url}")
# Test card extraction
cards = adapter.extract_cards(page)
print(f"Found {len(cards)} cards")
for c in cards[:3]:
print(f" {c['note_id']}: {c.get('title', '')[:50]}")
# Test detail fetch on the first card
if cards:
card = cards[0]
opened = adapter.open_detail(page, card)
print(f"Opened: {opened}")
loaded = adapter.wait_for_detail(page, card)
print(f"Loaded: {loaded}")
detail = adapter.extract_detail(page, card)
print(f"Detail: {detail}")
adapter.close_detail(page, card)
print("Closed, back to search results")
input("Press Enter to close browser...")
browser.close()
```
For automated tests, verify protocol compliance:
```
from scraper_kit.adapter import SiteAdapter, REQUIRED_POST_KEYS
def test_implements_protocol():
assert isinstance(MySiteAdapter(), SiteAdapter)
def test_extract_detail_has_required_keys():
detail = parse_detail_api_response(sample_response) # unit-test your parser
missing = REQUIRED_POST_KEYS - detail.keys()
assert not missing, f"Missing: {missing}"
```
### Common Mistakes
| Mistake | Consequence | Fix |
|---------|-------------|-----|
| `close_detail` doesn't restore the search results | Page loop stalls: `extract_cards` returns `[]` on the next page | After closing, `wait_for_selector` on a search-results element |
| `has_captcha` matches normal UI text | Engine bails on healthy sessions | Match structural overlay elements, not text like "验证" |
| `extract_cards` returns duplicates | Engine wastes time refetching posts it has already seen | Dedup by `note_id` before returning |
| `wait_for_detail` waits on the wrong selector | Returns `True` before the content loads, so detail extraction gets empty strings | Wait for the content container, not the page shell |
| `extract_detail` chains `page.query_selector` calls | Slow (many browser round trips) | One `page.evaluate()` with a JS block |
| `parse_engagement` ignores locale suffixes | Seen-set comparisons break, trending detection fails | Handle `k`, `万`, `w`, and commas |
| A `get_api_routes` parser throws on an unexpected response | Crashes the PassiveTap listener; every later capture fails | Wrap parser bodies in try/except and return `None` on failure (see below) |
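The last row is worth spelling out. Here is the feed parser from earlier made defensive (same hypothetical field names):
```
def _parse_feed(self, body):
    try:
        post = body["data"]
        return "feed", {
            "note_id": post["id"],
            "title": post.get("title", ""),
            "content": post.get("text", ""),
            "likes": str(post.get("like_count", "")),
            "comments": str(post.get("comment_count", "")),
            "date": post.get("created_at", ""),
        }
    except (KeyError, TypeError):
        return None  # malformed response: skip this capture, keep the listener alive
```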
## `fetch_posts()` Reference
```
from scraper_kit.engine import fetch_posts
posts = fetch_posts(
adapter, # your SiteAdapter instance
"keyword", # search keyword
page=page, # open Playwright page
context=context, # Playwright browser context
max_pages=20, # pages of search results to scroll
max_posts=10, # stop after this many posts
max_comments=5, # max comments per post
sort="latest", # passed to adapter.apply_filters()
analysis_window="week", # passed to adapter.apply_filters()
seen_data={"note_id": {...}}, # previously seen posts (for dedup/trending)
grind=False, # True = extra cooldown rounds instead of stopping
event_logger=logger, # optional FetchEventLogger
failure_bundle_verbosity="off", # off | minimal | standard | full
)
```
Returns a list of post dicts. The framework handles deduplication, health-based stopping, and adaptive delays internally.
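`seen_data` is how state crosses runs. `filtering/seen.py` ships `load_seen()`/`save_seen()` helpers for this, but their signatures aren't shown here, so this sketch round-trips the documented dict shape by hand (the file path is hypothetical):
```
import json
from pathlib import Path

seen_path = Path("data/seen/mysite_keyword.json")   # hypothetical location
seen_data = json.loads(seen_path.read_text()) if seen_path.exists() else {}

posts = fetch_posts(adapter, "keyword", page=page, context=context,
                    seen_data=seen_data)

# Fold new posts back in, keyed by note_id as shown in the reference above.
seen_data.update({p["note_id"]: p for p in posts})
seen_path.parent.mkdir(parents=True, exist_ok=True)
seen_path.write_text(json.dumps(seen_data, ensure_ascii=False))
```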
## Architecture
```
scraper_kit/
├── adapter.py # SiteAdapter Protocol definition
├── browser/ # Chrome launch, stealth, cookies, UA
│ ├── chrome.py # find_system_chrome(), launch_cdp_browser()
│ ├── stealth.py # build_stealth_shim(), setup_cdp_stealth()
│ ├── cookies.py # migrate_cookies()
│ ├── session.py # open_browser() context manager
│ └── ua.py # User-Agent building
├── engine/ # Core scraping orchestration
│ ├── orchestrator.py # fetch_posts() entry point
│ ├── hybrid.py # Hybrid strategy (UI + passive API)
│ ├── passive_tap.py # PassiveTap, WaitResult, wait_for()
│ ├── health.py # HealthMonitor (rolling-window scoring)
│ ├── failure_bundle.py # Diagnostic snapshots on failure
│ └── errors.py # ScraperSignal, ScraperError
├── human/ # Human-like behavior
│ └── behavior.py # Bezier mouse, inertial scroll, log-normal sleep
├── filtering/ # Dedup and seen-set management
│ ├── seen.py # load_seen(), save_seen(), should_refetch()
│ ├── card_filter.py # filter_cards()
│ └── counting.py # fetch_count(), grind_count()
└── telemetry/ # Structured JSONL event logging
└── logger.py # FetchEventLogger
```
### Key Design Decisions
- **Adapter pattern**: all site-specific logic lives in the adapter. The engine never touches selectors, URLs, or API endpoints directly.
- **No browser lifecycle management**: the framework receives an open page/context and never creates one. Your application owns the browser.
- **Passive over active**: intercept the site's own API calls instead of making new ones. Invisible to rate limiters.
- **Health-driven**: adaptive delays and auto-stop based on a rolling window of successes and failures. No fixed retry counts.
## Telemetry
Every scrape run writes structured JSONL events:
```
from scraper_kit.telemetry import FetchEventLogger
with FetchEventLogger("run_001", "keyword", site="mysite") as logger:
posts = fetch_posts(
adapter, "keyword",
page=page, context=context,
event_logger=logger,
)
```
Events: `search_start`, `card_attempt`, `card_result`, `cards_skipped`, `search_end`, `failure_dump`, `run_end`. Every line carries the timestamp, run ID, keyword, health score, and data source.
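Because the log is plain JSONL, post-run analysis takes a few lines of standard library. A sketch, assuming the log file path and an `event` field naming the event type (the actual field names aren't documented here):
```
import json
from collections import Counter

with open("data/logs/run_001.jsonl") as f:        # hypothetical path
    events = [json.loads(line) for line in f]

print(Counter(e["event"] for e in events))        # event-type histogram
failures = [e for e in events if e["event"] == "failure_dump"]
```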
## Requirements
- Python 3.10+
- System Chrome or Chromium (for CDP mode), with fallback to Playwright's bundled Chromium
- [patchright](https://github.com/nicezombie/patchright) (installed automatically)
## License
MIT