How to Use the AI Deepfake Detector
📝 Step 1: Provide Media
Paste a public URL to an image/video or use the sample button to load test media.
⚙️ Step 2: Adjust Settings (Optional)
Choose media type (auto-detected if possible) and set detection sensitivity for speed vs. accuracy.
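Media-type auto-detection can be as simple as checking the URL's file extension against the supported formats. This is a minimal illustrative sketch, not the tool's actual implementation; the helper name `detectMediaType` is hypothetical.

```javascript
// Hypothetical sketch: infer media type from a URL's file extension.
// The extension lists mirror the supported formats (JPG/PNG/WEBP/GIF, MP4/WebM/OGG).
const IMAGE_EXTS = new Set(["jpg", "jpeg", "png", "webp", "gif"]);
const VIDEO_EXTS = new Set(["mp4", "webm", "ogg"]);

function detectMediaType(url) {
  const path = new URL(url).pathname.toLowerCase();
  const ext = path.includes(".") ? path.split(".").pop() : "";
  if (IMAGE_EXTS.has(ext)) return "image";
  if (VIDEO_EXTS.has(ext)) return "video";
  return "unknown"; // fall back to manual selection in the UI
}
```

When the extension is missing or unrecognized, the tool would fall back to asking you to pick the media type manually.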
🔍 Step 3: Analyze
Click "Analyze Media". Our AI models inspect metadata, compression artifacts, and generative fingerprints.
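Conceptually, the analysis step runs a series of independent checks and collects any findings into the anomaly list shown in the report. The sketch below only illustrates that structure: the check names come from the text above, but their bodies are stand-in stubs and the input fields (`exifSoftware`, `doubleJpegScore`) are assumptions.

```javascript
// Illustrative sketch: each check returns a finding string or null,
// and non-null findings become the report's anomaly list.
function checkMetadata(media) {
  // Stub: flag metadata that names a known generative tool.
  return media.exifSoftware && /stable.?diffusion|dall.?e/i.test(media.exifSoftware)
    ? "metadata mentions a generative tool"
    : null;
}

function checkCompression(media) {
  // Stub: a precomputed double-JPEG score stands in for real artifact analysis.
  return media.doubleJpegScore > 0.8 ? "double-JPEG compression artifacts" : null;
}

function analyze(media, checks = [checkMetadata, checkCompression]) {
  return checks.map((check) => check(media)).filter((finding) => finding !== null);
}
```

Keeping each check independent makes it easy to add new detectors (e.g. a generative-fingerprint model) without touching the aggregation logic.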
📋 Step 4: Review Report
Get a confidence score, verdict (Real/Fake/Uncertain), and a list of detected anomalies. Copy or download the report.
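The three-way verdict can be thought of as a simple thresholding of the confidence score. The sketch below uses the 40%–60% "Uncertain" band described in the FAQ; treating `confidence` as a fake-probability in [0, 1] is an assumption for illustration.

```javascript
// Sketch of the verdict mapping. Assumes `confidence` is a fake-probability
// in [0, 1]; the 0.4/0.6 cutoffs match the Uncertain band described in the FAQ.
function verdict(confidence) {
  if (confidence > 0.6) return "Fake";
  if (confidence < 0.4) return "Real";
  return "Uncertain";
}
```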
💡 Pro Tips
- Use the Load Sample button to see a demo analysis.
- High sensitivity may produce false positives on heavily compressed media.
- Analysis checks for: GAN artifacts, inconsistent lighting, unnatural eye blinking, and metadata mismatches.
- For videos, key frames are sampled for efficiency.
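The key-frame sampling mentioned in the last tip can be sketched as picking a bounded number of evenly spaced frame indices. The function name and the default budget of 8 frames are illustrative assumptions, not the tool's actual parameters.

```javascript
// Sketch of even key-frame sampling: choose at most `maxFrames` evenly
// spaced frame indices (centered within each interval) from a video.
function sampleFrameIndices(totalFrames, maxFrames = 8) {
  if (totalFrames <= maxFrames) {
    // Short video: analyze every frame.
    return Array.from({ length: totalFrames }, (_, i) => i);
  }
  const step = totalFrames / maxFrames;
  return Array.from({ length: maxFrames }, (_, i) => Math.floor(i * step + step / 2));
}
```

Sampling keeps analysis time roughly constant regardless of video length, at the cost of possibly missing manipulations confined to unsampled frames.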
🔍 Example Indicators
- Inconsistent reflections
- Unnatural skin texture
- AI upscaling artifacts
- Metadata anomalies
Frequently Asked Questions
What types of deepfakes can it detect?
Our detector identifies common manipulation techniques including face-swaps (DeepFakes, FaceSwap), expression re-enactment (Face2Face), and entirely synthetic media generated by StyleGAN, Stable Diffusion, and DALL-E. It's optimized for faces but works on general scenes.
How accurate is the detection?
The detector uses an ensemble of convolutional neural networks (CNNs) and vision transformers. In our tests, it achieves ~92% accuracy on standard benchmark datasets (FaceForensics++, Celeb-DF). Real-world accuracy depends on compression, resolution, and whether the deepfake method was seen during training.
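A common way to combine an ensemble's outputs is a (weighted) average of each model's fake-probability. This is a generic sketch of that technique, not the tool's actual aggregation; the function name and the equal-weight default are assumptions.

```javascript
// Illustrative ensemble aggregation: weighted average of per-model
// fake-probabilities. Defaults to equal weights across models.
function ensembleScore(scores, weights = null) {
  const w = weights ?? scores.map(() => 1 / scores.length);
  return scores.reduce((sum, score, i) => sum + score * w[i], 0);
}
```

Averaging tends to smooth out individual model errors, which is one reason ensembles of CNNs and transformers can outperform any single model.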
Is my media secure and private?
Absolutely. All analysis happens directly in your browser using client-side JavaScript and pre-loaded AI models (via TensorFlow.js). Your media files are never uploaded to our servers. You can even disconnect from the internet after the page loads.
What file formats are supported?
Images: JPG, PNG, WEBP, GIF. Videos: MP4, WebM, OGG. Files are processed locally. For large videos, analysis may take a moment as we sample frames.
Why might I get an "Uncertain" verdict?
An "Uncertain" result (a confidence score between 40% and 60%) can occur if the media is heavily compressed, has very low resolution, or contains mixed signals (e.g., a real background with a fake face). Try "High" sensitivity for a more decisive, though potentially less accurate, result.
Can it detect audio deepfakes?
Currently, this tool focuses on visual media (images and videos). Audio analysis requires different models and is planned for a future update.