UltimaScraperAPI is a modular Python scraping framework designed to interact with premium content platforms such as OnlyFans, Fansly, and LoyalFans. It provides a unified, async-first API for authentication, user data retrieval, posts, messages, and media downloads with comprehensive session management and caching capabilities.
Platform Status:
- ✅ OnlyFans: Fully supported and stable
- 🚧 Fansly: Work in progress with limited functionality
- 🚧 LoyalFans: Work in progress with limited functionality
Read the full documentation →
- Installation Guide - Installation methods and requirements
- Quick Start Tutorial - Get up and running in minutes
- Configuration - Complete configuration reference
- Authentication - How to authenticate with platforms
- Working with APIs - Common operations and patterns
- Proxy Support - Configure proxies and VPNs
- Session Management - Redis integration and caching
- OnlyFans API - Complete OnlyFans API reference
- Fansly API - Fansly API reference (WIP)
- LoyalFans API - LoyalFans API reference (WIP)
- Helpers - Utility functions and helpers
- Architecture - System design and architecture
- Contributing Guide - How to contribute
- Testing - Running and writing tests
- Multi-Platform Support: OnlyFans (stable), Fansly (WIP), and LoyalFans (WIP)
- Async-First Design: Built with `asyncio` and `aiohttp` for high performance
- Flexible Authentication: Cookie-based and guest authentication flows
- Unified Data Models: Consistent Pydantic models for users, posts, messages, and media
- Highly Extensible: Modular architecture makes adding new platforms easy
- Advanced Networking: Session management, connection pooling, proxy support (HTTP/HTTPS/SOCKS)
- WebSocket Support: Real-time updates and live notifications
- Redis Integration: Optional caching, session persistence, and rate limiting
- Type Safety: Comprehensive type hints and validation with Pydantic v2
- DRM Support: Widevine CDM integration for encrypted content
- Rate Limiting: Built-in rate limiting and exponential backoff (see the sketch after this list)
- Error Handling: Comprehensive error handling with retry mechanisms
- Comprehensive Logging: Detailed logging for debugging and monitoring
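Rate limiting and retries are handled inside the library's session layer; if you want the same pattern around your own HTTP calls, here is a minimal sketch of exponential backoff with jitter using plain aiohttp. The function name, retry count, and status codes are illustrative assumptions, not part of the UltimaScraperAPI API:

```python
import asyncio
import random

import aiohttp


async def fetch_with_backoff(session: aiohttp.ClientSession, url: str, retries: int = 5) -> bytes:
    """Hypothetical helper: retry a GET with exponential backoff on throttling errors."""
    for attempt in range(retries):
        async with session.get(url) as response:
            if response.status not in (429, 500, 502, 503):
                response.raise_for_status()
                return await response.read()
        # Back off 1s, 2s, 4s, ... plus jitter so concurrent workers desynchronize
        await asyncio.sleep(2**attempt + random.random())
    raise RuntimeError(f"Gave up on {url} after {retries} attempts")
```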
Requirements:
- Python: 3.10, 3.11, 3.12, 3.13, or 3.14 (but less than 4.0)
- Package Manager: uv (recommended) or pip
- Optional: Redis 6.2+ for caching and session management
uv is a fast Python package installer and resolver:
```bash
# Install uv if you haven't already
pip install uv

# Install UltimaScraperAPI
uv pip install ultima-scraper-api
```

Or install directly with pip:

```bash
pip install ultima-scraper-api
```

For development or the latest features:
```bash
# Clone the repository
git clone https://github.com/UltimaHoarder/UltimaScraperAPI.git
cd UltimaScraperAPI

# Install with uv
uv pip install -e .

# Or with pip
pip install -e .
```

Always use a virtual environment to avoid dependency conflicts:
```bash
# Create virtual environment
python -m venv venv

# Activate it
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows

# Install the package
uv pip install ultima-scraper-api
```

A minimal quick-start example:

```python
import asyncio
from pathlib import Path

from ultima_scraper_api import UltimaScraperAPI
from ultima_scraper_api.apis.onlyfans import url_picker
from ultima_scraper_api.config import UltimaScraperAPIConfig


async def main():
    # Initialize configuration
    config = UltimaScraperAPIConfig()

    # Initialize UltimaScraperAPI
    api = UltimaScraperAPI(config)
    await api.init()  # Initialize the API

    # Get OnlyFans API instance
    onlyfans_api = api.api_instances.OnlyFans

    # Authentication credentials
    # Obtain these from your browser's Network tab (F12)
    # See: https://ultimahoarder.github.io/UltimaScraperAPI/user-guide/authentication/
    auth_json = {
        "cookie": "your_cookie_value",
        "user_agent": "your_user_agent",
        "x-bc": "your_x-bc_token",
    }

    # Log in (a login_context context manager is also available for automatic cleanup)
    authed = await onlyfans_api.login(auth_json=auth_json)
    if authed and authed.is_authed():
        # Get authenticated user info
        me = await authed.get_me()
        print(f"Logged in as: {me.username}")

        # Get user profile
        user = await authed.get_user("username")
        if user:
            print(f"User: {user.username} ({user.name})")

            # Fetch the user's posts
            posts = await user.get_posts(limit=10)
            print(f"Found {len(posts)} posts")

            # Download media from posts
            download_dir = Path("downloads")
            download_dir.mkdir(exist_ok=True)

            for post in posts:
                if not post.media:
                    continue
                for media in post.media:
                    print(f"Downloading: {media.id}")

                    # Get the media URL using url_picker
                    media_url = url_picker(post.get_author(), media)
                    if media_url:
                        # Download media content
                        response = await authed.auth_session.request(
                            media_url.geturl(),
                            premade_settings="",
                        )
                        if response:
                            content = await response.read()

                            # Save to file
                            filename = f"{media.id}.{media.type}"
                            filepath = download_dir / filename
                            with open(filepath, "wb") as f:
                                f.write(content)
                            print(f"  ✓ Saved: {filename}")


if __name__ == "__main__":
    asyncio.run(main())
```

For DRM-protected content, you need to configure Widevine CDM and follow a multi-step process:
```python
import asyncio
from pathlib import Path

from ultima_scraper_api import UltimaScraperAPI
from ultima_scraper_api.config import UltimaScraperAPIConfig


async def download_drm_content():
    # Configure DRM settings
    config = UltimaScraperAPIConfig()
    config.settings.drm.device_client_blob_filepath = Path("/path/to/device_client_id_blob")
    config.settings.drm.device_private_key_filepath = Path("/path/to/device_private_key")

    # Initialize API
    api = UltimaScraperAPI(config)
    await api.init()
    onlyfans_api = api.api_instances.OnlyFans

    auth_json = {
        "cookie": "your_cookie_value",
        "user_agent": "your_user_agent",
        "x-bc": "your_x-bc_token",
    }
    authed = await onlyfans_api.login(auth_json=auth_json)
    if authed and authed.is_authed():
        # Get DRM manager
        only_drm = authed.drm
        if only_drm:
            user = await authed.get_user("username")
            posts = await user.get_posts(limit=10)

            download_dir = Path("downloads")
            download_dir.mkdir(exist_ok=True)

            for post in posts:
                if not post.media:
                    continue
                for media in post.media:
                    # Check if media has DRM protection
                    if media.files and media.files.drm:
                        print(f"Processing DRM-protected media: {media.id}")

                        # Get cookies for DRM requests
                        cookies = media.files.drm.dash.__drm_media__.get_cookies()

                        # Resolve DRM URLs and get the decryption key
                        video_url, audio_url, key = await media.files.drm.resolve_drm()

                        # Download encrypted video
                        response = await authed.auth_session.request(
                            video_url,
                            premade_settings="",
                            custom_cookies=cookies,
                        )
                        enc_video_filepath = download_dir / video_url.split("/")[-1]
                        with open(enc_video_filepath, "wb") as f:
                            f.write(await response.read())
                        print(f"  Downloaded encrypted video: {enc_video_filepath.name}")

                        # Download encrypted audio
                        response = await authed.auth_session.request(
                            audio_url,
                            premade_settings="",
                            custom_cookies=cookies,
                        )
                        enc_audio_filepath = download_dir / audio_url.split("/")[-1]
                        with open(enc_audio_filepath, "wb") as f:
                            f.write(await response.read())
                        print(f"  Downloaded encrypted audio: {enc_audio_filepath.name}")

                        # Decrypt files
                        decrypted_video = only_drm.decrypt_file(enc_video_filepath, key)
                        decrypted_audio = only_drm.decrypt_file(enc_audio_filepath, key)
                        print(f"  Decrypted video: {decrypted_video.name}")
                        print(f"  Decrypted audio: {decrypted_audio.name}")

                        # Merge video and audio
                        output_filepath = download_dir / f"{media.id}_final.mp4"
                        future = only_drm.enqueue_merge_task(
                            output_filepath,
                            [decrypted_video, decrypted_audio],
                        )

                        # Wait for merge to complete
                        success = await asyncio.wrap_future(future)
                        if success:
                            print(f"  ✓ Merged to: {output_filepath.name}")

                        # Clean up temporary files
                        enc_video_filepath.unlink()
                        enc_audio_filepath.unlink()
                        decrypted_video.unlink()
                        decrypted_audio.unlink()


if __name__ == "__main__":
    asyncio.run(download_drm_content())
```

DRM Setup Requirements:
- Widevine CDM Files: You need valid Widevine CDM files:
  - `device_client_id_blob`: Client ID blob file
  - `device_private_key`: Private key file
- FFmpeg: Required for merging video and audio streams:

  ```bash
  # Install FFmpeg
  sudo apt install ffmpeg   # Ubuntu/Debian
  brew install ffmpeg       # macOS
  ```

- Configure the file paths in your config:

  ```python
  config.settings.drm.device_client_blob_filepath = Path("/path/to/device_client_id_blob")
  config.settings.drm.device_private_key_filepath = Path("/path/to/device_private_key")
  ```
Note: DRM content requires proper Widevine CDM setup. Obtaining CDM files is beyond the scope of this documentation and must comply with applicable laws and terms of service.
You need three pieces of information from your browser:
- Cookie: Your session cookie
- User-Agent: Your browser's user agent string
- x-bc (OnlyFans only): Dynamic authorization token
Quick Steps:
- Open your browser and navigate to the platform
- Open Developer Tools (F12)
- Go to the Network tab
- Look for API requests and copy the required headers
For detailed instructions with screenshots, see the Authentication Guide.
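Rather than hard-coding these values, you might keep them in a local file. A minimal sketch, assuming an `auth.json` file you create yourself (the filename and helper are illustrative) and the `onlyfans_api` instance from the quick-start example:

```python
import json
from pathlib import Path


async def login_from_file(onlyfans_api):
    # auth.json is a file you create with the three values gathered above:
    # {"cookie": "...", "user_agent": "...", "x-bc": "..."}
    auth_json = json.loads(Path("auth.json").read_text())
    return await onlyfans_api.login(auth_json=auth_json)
```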
Some platforms support guest access for public content:
```python
async with api.login_context(guest=True) as authed:
    # Limited operations available (public profiles, posts, etc.)
    user = await authed.get_user("public_username")
    if user:
        print(f"Public profile: {user.username}")
```

Configuration can be loaded from a JSON file or built programmatically:

```python
from ultima_scraper_api import UltimaScraperAPIConfig

# Load from JSON file
config = UltimaScraperAPIConfig.from_json_file("config.json")

# Or create programmatically
config = UltimaScraperAPIConfig()
```

You can also supply credentials through environment variables:

```bash
# Set up your credentials
export ONLYFANS_COOKIE="your_cookie_value"
export ONLYFANS_USER_AGENT="Mozilla/5.0 ..."
export ONLYFANS_XBC="your_x-bc_token"
```

Then load them in your code:

```python
import os

auth_json = {
    "cookie": os.getenv("ONLYFANS_COOKIE"),
    "user_agent": os.getenv("ONLYFANS_USER_AGENT"),
    "x-bc": os.getenv("ONLYFANS_XBC"),
}
```

Configure HTTP, HTTPS, or SOCKS proxies:
```python
from ultima_scraper_api import UltimaScraperAPIConfig
from ultima_scraper_api.config import Network, Proxy

config = UltimaScraperAPIConfig(
    network=Network(
        proxy=Proxy(
            http="http://proxy.example.com:8080",
            https="https://proxy.example.com:8080",
            # Or a SOCKS proxy:
            # http="socks5://proxy.example.com:1080"
        )
    )
)
```

Enable Redis for caching and session management:
```python
from ultima_scraper_api.config import Redis

config = UltimaScraperAPIConfig(
    redis=Redis(
        host="localhost",
        port=6379,
        db=0,
        password="your_password",  # Optional
    )
)
```

For complete configuration options, see the Configuration Guide.
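For file-based setup, a `config.json` mirroring the programmatic examples above might look like the following; the exact schema is defined by the Configuration Guide, so treat this as an illustration rather than a reference:

```json
{
  "network": {
    "proxy": {
      "http": "http://proxy.example.com:8080",
      "https": "https://proxy.example.com:8080"
    }
  },
  "redis": {
    "host": "localhost",
    "port": 6379,
    "db": 0
  }
}
```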
List active subscriptions:

```python
async with api.login_context(auth_json) as authed:
    # Get all active subscriptions
    subscriptions = await authed.get_subscriptions()
    for sub in subscriptions:
        user = sub.user
        print(f"{user.username} - Subscribed: {sub.subscribed_at}")
        print(f"  Expires: {sub.expires_at}")
        print(f"  Price: ${sub.price}")
```

Fetch message conversations:

```python
async with api.login_context(auth_json) as authed:
    # Get a specific user
    user = await authed.get_user("username")

    # Fetch the message conversation
    messages = await user.get_messages(limit=50)
    for msg in messages:
        print(f"[{msg.created_at}] {msg.from_user.username}: {msg.text}")

        # Check for media attachments
        if msg.media:
            print(f"  Attachments: {len(msg.media)} media files")
```

Download story media:

```python
import aiofiles
async with api.login_context(auth_json) as authed:
    user = await authed.get_user("username")

    # Get active stories
    stories = await user.get_stories()
    for story in stories:
        if story.media:
            for media in story.media:
                # Download media content
                content = await media.download()

                # Save to file
                filename = f"stories/{media.filename}"
                async with aiofiles.open(filename, "wb") as f:
                    await f.write(content)
                print(f"Downloaded: {filename}")
```

Paginate through all of a user's posts:

```python
async with api.login_context(auth_json) as authed:
    user = await authed.get_user("username")

    # Fetch all posts with pagination
    all_posts = []
    offset = 0
    limit = 50

    while True:
        posts = await user.get_posts(limit=limit, offset=offset)
        if not posts:
            break
        all_posts.extend(posts)
        offset += limit
        print(f"Fetched {len(all_posts)} posts so far...")

    print(f"Total posts: {len(all_posts)}")
```

Run operations concurrently:

```python
import asyncio
async with api.login_context(auth_json) as authed:
    # Get multiple users concurrently
    usernames = ["user1", "user2", "user3"]
    tasks = [authed.get_user(username) for username in usernames]
    users = await asyncio.gather(*tasks, return_exceptions=True)

    for username, user in zip(usernames, users):
        if isinstance(user, Exception):
            print(f"Error fetching {username}: {user}")
        else:
            print(f"Fetched: {user.username} - {user.posts_count} posts")
```

For more examples and patterns, see the Working with APIs Guide.
Set up a development environment:

```bash
# Clone the repository
git clone https://github.com/UltimaHoarder/UltimaScraperAPI.git
cd UltimaScraperAPI

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows

# Install in development mode with dev dependencies
uv pip install -e ".[dev]"

# Or with pip
pip install -e ".[dev]"
```

Run the test suite:

```bash
# Run all tests
pytest
# Run with coverage report
pytest --cov=ultima_scraper_api --cov-report=html
# Run specific test file
pytest tests/test_onlyfans.py
# Run with verbose output
pytest -v
```

Format and type-check the code:

```bash
# Format code with Black
black ultima_scraper_api/ tests/
# Check formatting without changing files
black --check ultima_scraper_api/
# Type checking (if using mypy)
mypy ultima_scraper_api/
```

Build the documentation:

```bash
# Serve documentation locally with live reload
uv run mkdocs serve -a localhost:8001
# Open http://localhost:8001 in your browser
# Build static documentation site
uv run mkdocs build --clean
# Deploy to GitHub Pages
uv run mkdocs gh-deploy
```

Run automated sessions with nox:

```bash
# Run all sessions (tests, linting, docs)
nox
# Run specific session
nox -s tests
nox -s lint
nox -s docs
```

For detailed contribution guidelines, see the Contributing Guide.
Contributions are welcome! Please read the Contributing Guide for details on:
- Code of conduct
- Development setup
- Submitting pull requests
- Writing tests
- Documentation standards
Project structure:

```text
UltimaScraperAPI/
├── ultima_scraper_api/      # Main package
│   ├── apis/                # Platform-specific APIs
│   │   ├── onlyfans/        # OnlyFans implementation
│   │   ├── fansly/          # Fansly implementation (WIP)
│   │   └── loyalfans/       # LoyalFans implementation (WIP)
│   ├── classes/             # Utility classes
│   ├── helpers/             # Helper functions
│   ├── managers/            # Session/scrape managers
│   └── models/              # Data models
├── documentation/           # MkDocs documentation
├── tests/                   # Test files
├── typings/                 # Type stubs
└── pyproject.toml           # Project configuration
```
This project is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.
- ✅ You can use this commercially
- ✅ You can modify the code
- ✅ You can distribute it
- ⚠️ You must disclose source code when distributing
- ⚠️ You must use the same license for derivatives
- ⚠️ Network use requires source code disclosure
This software is provided for educational and research purposes. Users are responsible for complying with the terms of service of any platforms they interact with using this software.
Built with industry-leading open source libraries:
- aiohttp - Async HTTP client/server framework
- Pydantic - Data validation using Python type hints
- httpx - Modern HTTP client
- Redis - In-memory data structure store for caching
- websockets - WebSocket client and server
- MkDocs Material - Beautiful documentation site generator
- pytest - Testing framework
- Black - Code formatter
Special thanks to all contributors and the open source community!
- Documentation - Comprehensive guides and API reference
- Issue Tracker - Report bugs or request features
- Discussions - Ask questions and share ideas
- Releases - Version history and changelogs
If you encounter issues:
- Check the documentation first
- Search existing issues for similar problems
- Create a new issue with a detailed description and minimal reproduction example
- Join the discussions for community support
Made with ❤️ by UltimaHoarder