Photon can extract the following data while crawling:
- URLs (in-scope & out-of-scope)
- URLs with parameters (example.com/gallery.php?id=2)
- Intel (emails, social media accounts, Amazon S3 buckets, etc.)
- Files (pdf, png, xml, etc.)
- Secret keys (auth/API keys & hashes)
- Strings matching custom regex patterns
- Subdomains & DNS-related data
The extracted information is saved in an organized manner or can be exported as JSON.
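To illustrate the kind of extraction involved, here is a minimal Python sketch of regex-based intel scraping from a page body. The patterns and function name are simplified stand-ins for illustration, not Photon's actual code:

```python
import re

# Simplified illustrative patterns -- Photon's real patterns are more thorough.
PATTERNS = {
    "emails": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "s3_buckets": re.compile(r"[\w.-]+\.s3\.amazonaws\.com"),
}

def extract_intel(page_text):
    """Return a dict mapping each intel type to the unique matches found."""
    return {name: sorted(set(pattern.findall(page_text)))
            for name, pattern in PATTERNS.items()}

sample = "Contact admin@example.com or see assets.example.s3.amazonaws.com"
print(extract_intel(sample))
```

Running a collection of such patterns over every crawled page, and bucketing the matches by type, is what produces the organized output described above.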
Control timeout, delay, add seeds, exclude URLs matching a regex pattern, and other cool stuff. The extensive range of options Photon provides lets you crawl the web exactly the way you want.
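As an illustration of what an exclude pattern does during a crawl, this sketch prunes a URL queue with a user-supplied regex; the function name and URLs are made up for the example:

```python
import re

def filter_urls(urls, exclude_pattern):
    """Drop URLs matching the exclude regex, keeping the crawl scope tight."""
    exclude = re.compile(exclude_pattern)
    return [url for url in urls if not exclude.search(url)]

queue = [
    "https://example.com/page?id=1",
    "https://example.com/logout",
    "https://example.com/static/logo.png",
]
# Skip logout links and static assets, as an exclude-style pattern might.
print(filter_urls(queue, r"/logout|/static/"))
```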
Photon’s smart thread management & refined logic give you top-notch performance.
Still, crawling can be resource-intensive, but Photon has some tricks up its sleeve. You can fetch URLs archived by archive.org to use as seeds with the --wayback option.
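One way to pull archived URLs yourself is the Wayback Machine's public CDX API. The sketch below only builds the query URL (no network call is made), and the endpoint shown is the public CDX interface, not necessarily what Photon uses internally:

```python
from urllib.parse import urlencode

def cdx_query(domain):
    """Build a Wayback Machine CDX API query listing archived URLs for a domain."""
    params = {
        "url": f"{domain}/*",
        "output": "json",
        "fl": "original",       # only return the original-URL column
        "collapse": "urlkey",   # de-duplicate equivalent URLs
    }
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)

print(cdx_query("example.com"))
```

Fetching that URL returns a list of archived page URLs, which make good crawl seeds for an old or large target.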
In Ninja Mode, which is enabled with the --ninja option, 4 online services are used to make requests to the target on your behalf.
So basically, you now have 4 clients making requests to the same server simultaneously. This gives you a speed boost if you have a slow connection, minimizes the risk of connection resets, and spaces out the requests coming from any single client.
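The load-spreading idea behind Ninja Mode can be sketched as a simple round-robin over multiple requesters. The service names below are hypothetical stand-ins, not the real endpoints Photon uses:

```python
from itertools import cycle

# Hypothetical stand-ins for the online services that proxy the requests.
SERVICES = ["service-a", "service-b", "service-c", "service-d"]

def assign_requests(urls):
    """Rotate target URLs across the 4 services, round-robin style."""
    rotation = cycle(SERVICES)
    return [(next(rotation), url) for url in urls]

for service, url in assign_requests(
        [f"https://target.example/page{i}" for i in range(6)]):
    print(f"{service} fetches {url}")
```

Because consecutive requests come from different clients, each individual service hits the target only once every 4 requests, which is the delay effect described above.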
Frequent & Seamless Updates
Photon is under heavy development, and updates that fix bugs, optimize performance, and add new features are rolled out regularly.
If you would like to see the features and issues being worked on, check the Development project board.
You can check for and install updates with the --update option. Photon has seamless update capabilities, which means you can update it without losing any of your saved data.
You can contribute in the following ways:
- Report bugs
- Develop plugins
- Add more “APIs” for ninja mode
- Give suggestions to make it better
- Fix issues & submit a pull request