# 🦊 Fox Cheat Sheet – Penetration Testing
[`Penetration testing (or pen testing)`](https://en.wikipedia.org/wiki/Penetration_test) is an authorized, simulated attack against a computer system and its physical infrastructure, carried out to uncover potential security weaknesses and vulnerabilities. The goal of such a simulated attack is to identify any weak spots that a real attacker could exploit. It is like a bank hiring someone to pose as a burglar and try to break into its building and reach the vault: if the "burglar" succeeds, the bank gains valuable insight into how its security measures need to be tightened. If you discover a vulnerability, please follow [this guidance](https://kb.cert.org/vuls/guidance/) to report it responsibly.
Websites to use when writing reports:
* [SysReptor Github](https://github.com/Syslifters/sysreptor) 或 [SysReptor](https://labs.sysre.pt/)
* [attack.mitre.org](https://attack.mitre.org)
* [cwe.mitre.org/data](https://cwe.mitre.org/data)
* [first.org/cvss/calculator/4.0](https://www.first.org/cvss/calculator/4.0)
* [nvd.nist.gov/ncp/repository](https://nvd.nist.gov/ncp/repository)
* [owasp.org/www-project-top-ten](https://owasp.org/www-project-top-ten)
* [cheatsheetseries.owasp.org](https://cheatsheetseries.owasp.org/Glossary.html)
## Overview
- [🦊 Fox Cheat Sheet – Penetration Testing](#-fox-cheat-sheet--penetration-testing)
- [Overview](#overview)
- [Fox Tips and Tricks](#fox-tips-and-tricks)
- [0. Install and Setup Tools](#0-install-and-setup-tools)
- [API Keys](#api-keys)
- [User-Agents](#user-agents)
- [DNS Resolvers](#dns-resolvers)
- [ProxyChains-NG](#proxychains-ng)
- [1. Reconnaissance](#1-reconnaissance)
- [1.1 Useful Websites](#11-useful-websites)
- [FOCA (Fingerprinting Organizations with Collected Archives)](#foca-fingerprinting-organizations-with-collected-archives)
- [DNS](#dns)
- [ASNmap](#asnmap)
- [dig](#dig)
- [DNSenum](#dnsenum)
- [DNSmap](#dnsmap)
- [DNSRecon](#dnsrecon)
- [Fierce](#fierce)
- [host](#host)
- [nslookup](#nslookup)
- [Nmap Enumaration](#nmap-enumaration)
- [WHOIS](#whois)
- [Amass](#amass)
- [assetfinder](#assetfinder)
- [Sublist3r](#sublist3r)
- [Subfinder](#subfinder)
- [httpx](#httpx)
- [gau](#gau)
- [urlhunter](#urlhunter)
- [wfuzz](#wfuzz)
- [Directory Fuzzing](#directory-fuzzing)
- [dirb](#dirb)
- [DirBuster](#dirbuster)
- [Dirsearch](#dirsearch)
- [feroxbuster](#feroxbuster)
- [ffuf](#ffuf)
- [gobuster](#gobuster)
- [Google Dorks](#google-dorks)
- [Chad](#chad)
- [PhoneInfoga](#phoneinfoga)
- [git-dumper](#git-dumper)
- [TruffleHog](#trufflehog)
- [katana](#katana)
- [Scrapy Scraper](#scrapy-scraper)
- [snallygaster](#snallygaster)
- [IIS Tilde Short name Scanning](#iis-tilde-short-name-scanning)
- [WhatWeb](#whatweb)
- [Parsero](#parsero)
- [EyeWitness](#eyewitness)
- [Wordlists](#wordlists)
- [2. Scanning/Enumeration](#2-scanningenumeration)
- [2.1 Useful Websites](#21-useful-websites)
- [masscan](#masscan)
- [rustscan](#rustscan)
- [Nmap](#nmap)
- [NetExec](#netexec)
- [NFS](#nfs)
- [Samba](#samba)
- [SNMP](#snmp)
- [testssl.sh](#testsslsh)
- [OpenSSL](#openssl)
- [keytool](#keytool)
- [uncover](#uncover)
- [Databases](#databases)
- [MYSQL](#mysql)
- [MSSQL](#mssql)
- [PostgreSQL](#postgresql)
- [sqlite](#sqlite)
- [Windows OS Enumeration](#windows-os-enumeration)
- [Windows Basic Commands](#windows-basic-commands)
- [nbtstat](#nbtstat)
- [winfo](#winfo)
- [nbtscan](#nbtscan)
- [smblcient](#smblcient)
- [rpcclient](#rpcclient)
- [enum4linux](#enum4linux)
- [3. Vulnerability Assesment/Exploiting](#3-vulnerability-assesmentexploiting)
- [3.1 Useful Websites](#31-useful-websites)
- [Collaborator Servers](#collaborator-servers)
- [Subdomain Takeover](#subdomain-takeover)
- [Search Exploits and Scanners](#search-exploits-and-scanners)
- [Subzy](#subzy)
- [subjack](#subjack)
- [Nikto](#nikto)
- [WPScan](#wpscan)
- [Joomla](#joomla)
- [Nuclei](#nuclei)
- [Arjun](#arjun)
- [Insecure Direct Object Reference (IDOR)](#insecure-direct-object-reference-idor)
- [HTTP Response Splitting](#http-response-splitting)
- [Cross-Site Scripting (XSS)](#cross-site-scripting-xss)
- [SQL Injection](#sql-injection)
- [sqlmap](#sqlmap)
- [dotdotpwn](#dotdotpwn)
- [Web Shells](#web-shells)
- [Send a Payload With Python](#send-a-payload-with-python)
- [SMTP](#smtp)
- [4. Post Exploitation](#4-post-exploitation)
- [4.1 Useful Websites](#41-useful-websites)
- [Generate a Reverse Shell Payload](#generate-a-reverse-shell-payload)
- [Generate a Reverse Shell Payload via MSFVenom](#generate-a-reverse-shell-payload-via-msfvenom)
- [PowerShell Encoded Command](#powershell-encoded-command)
- [Basics](#basics)
- [Stabilizing Linux Shell](#stabilizing-linux-shell)
- [Port Forwarding](#port-forwarding)
    - [SSH Port Forwarding](#ssh-port-forwarding)
- [sshuttle](#sshuttle)
- [chisel](#chisel)
- [socat](#socat)
- [Netcat Portfwd](#netcat-portfwd)
- [Meterpreter Portfwd](#meterpreter-portfwd)
- [ligolo-ng](#ligolo-ng)
- [Transfering Files Windows](#transfering-files-windows)
- [Transfering Files Linux](#transfering-files-linux)
- [Exfiltrating Data](#exfiltrating-data)
- [Linux Exfiltrating Data](#linux-exfiltrating-data)
- [SSH Exfiltrating Data](#ssh-exfiltrating-data)
- [Windows Exfiltrating Data](#windows-exfiltrating-data)
- [Active Directory and Windows Lateral Movement](#active-directory-and-windows-lateral-movement)
- [ASREPRoast](#asreproast)
- [bloodyAD](#bloodyad)
- [Bloodhound](#bloodhound)
- [CrackMapExec](#crackmapexec)
- [DCSync](#dcsync)
- [dcom-exec](#dcom-exec)
- [Decode Password](#decode-password)
- [Evil-WinRM](#evil-winrm)
- [kerbrute](#kerbrute)
- [ntpdate](#ntpdate)
- [powerview](#powerview)
- [psexec](#psexec)
- [Rubeus](#rubeus)
- [RunasCs](#runascs)
- [smbexec](#smbexec)
- [wmiexec.py](#wmiexecpy)
- [Linux Lateral Movement](#linux-lateral-movement)
- [Linux Search Non-Secure Files](#linux-search-non-secure-files)
- [Unsafe Bash](#unsafe-bash)
- [base64](#base64)
- [Powershell ToBase64String and Linux Base64](#powershell-tobase64string-and-linux-base64)
- [5. Password Cracking](#5-password-cracking)
- [5.1 Useful Websites](#51-useful-websites)
- [crunch](#crunch)
- [hash-identifier](#hash-identifier)
- [Hashcat](#hashcat)
- [Cracking the JWT](#cracking-the-jwt)
- [Hydra](#hydra)
- [John the Ripper](#john-the-ripper)
- [Password Spraying](#password-spraying)
- [6. Wi-Fi](#6-wi-fi)
- [Pixie Dust](#pixie-dust)
- [7. One-Liners for Bug Bounty](#7-one-liners-for-bug-bounty)
- [8. Miscellaneous](#8-miscellaneous)
- [8.1 Useful Websites](#81-useful-websites)
- [cURL](#curl)
- [Ncat](#ncat)
- [Port Scanner](#port-scanner)
- [Send Files](#send-files)
- [Executing Remote Script](#executing-remote-script)
- [Chat with Encryption](#chat-with-encryption)
- [Banner Grabbing](#banner-grabbing)
- [HTTPS-OpenSSL](#https-openssl)
- [Catch Shell](#catch-shell)
- [multi/handler](#multihandler)
- [ngrok](#ngrok)
- [Simple Web-Server](#simple-web-server)
- [SSH](#ssh)
- [swaks](#swaks)
- [xfreerdp](#xfreerdp)
- [Additional References](#additional-references)
## Fox Tips and Tricks
Hello there! Don't miss out – let's check it out now 🦊
» All suggestions are welcome «
## 0. Install and Setup Tools
**[`^ back to top ^`](#overview)**
Most tools can be installed with the Linux package manager:
```
apt update && apt -y install sometool
```
For more information see [kali.org/tools](https://www.kali.org/tools).
Some Python tools need to be downloaded and installed manually:
```
python3 setup.py install
```
Or installed with [pipx](https://pipx.pypa.io/):
```
# https://github.com/pypa/pipx
python3 -m pip install --user pipx
python3 -m pipx ensurepath
# Install an application globally:
pipx install pycowsay
pycowsay mooo
# Run an application without installing:
pipx run pycowsay moo
```
Some Golang tools need to be downloaded and built manually:
```
go build sometool.go
```
Or, installed directly:
```
go install -v github.com/user/sometool@latest
```
For more information see [pkg.go.dev](https://pkg.go.dev).
To set up Golang, run:
```
apt -y install golang
echo 'export GOROOT=/usr/lib/go' >> ~/.zshrc
echo 'export GOPATH=$HOME/go' >> ~/.zshrc
echo 'export PATH=$GOPATH/bin:$GOROOT/bin:$PATH' >> ~/.zshrc
source ~/.zshrc
```
If you use a different shell, you might need to write to `~/.bashrc`, etc.
Some tools that come as standalone binaries or shell scripts can be moved to the `/usr/bin/` directory for ease of use:
```
mv sometool.sh /usr/bin/sometool && chmod +x /usr/bin/sometool
```
Some Java tools need to be downloaded and run manually with Java (JRE):
```
java -jar sometool.jar
```
### API Keys
**[`^ back to top ^`](#overview)**
List of useful APIs to integrate in your tools:
* [shodan.io](https://developer.shodan.io) – IoT search engine and more.
* [censys.io](https://search.censys.io/api) – domain lookup and more.
* [github.com](https://github.com/settings/tokens) – public source code repository lookup.
* [virustotal.com](https://developers.virustotal.com/reference/overview) – malware database lookup.
* [cloud.projectdiscovery.io](https://cloud.projectdiscovery.io) – ProjectDiscovery tools
### User-Agents
**[`^ back to top ^`](#overview)**
Download a list of bot-safe User-Agents, requires [scrapeops.io](https://scrapeops.io) API key:
```
python3 -c 'import json, requests; open("./user_agents.txt", "w").write(("\n").join(requests.get("http://headers.scrapeops.io/v1/user-agents?api_key=SCRAPEOPS_API_KEY&num_results=100", verify = False).json()["result"]))'
```
### DNS Resolvers
**[`^ back to top ^`](#overview)**
Download a list of trusted DNS resolvers, or manually from [trickest/resolvers](https://github.com/trickest/resolvers):
```
python3 -c 'import json, requests; open("./resolvers.txt", "w").write(requests.get("https://raw.githubusercontent.com/trickest/resolvers/main/resolvers-trusted.txt", verify = False).text)'
```
### ProxyChains-NG
**[`^ back to top ^`](#overview)**
If Google or any other search engine or service blocks your tool, use ProxyChains-NG and Tor to bypass the restriction.
Installation:
```
apt update && apt -y install proxychains4 tor torbrowser-launcher
```
Do the following changes in `/etc/proxychains4.conf`:
```
round_robin
chain_len = 1
proxy_dns
remote_dns_subnet 224
tcp_read_time_out 15000
tcp_connect_time_out 8000
[ProxyList]
socks5 127.0.0.1 9050
```
Make sure to comment any chain type other than `round_robin` – e.g., comment `strict_chain` into `# strict_chain`.
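The chain-type edit above can also be scripted with `sed`. A minimal sketch, run against a local stand-in copy of the config (`proxychains4.conf.sample` and its contents are illustrative, not the real `/etc/proxychains4.conf`):

```shell
# Create a stand-in copy of the config (sample content, assumed shape)
cat <<'EOF' > proxychains4.conf.sample
strict_chain
#round_robin
proxy_dns
EOF
# Comment out strict_chain and uncomment round_robin
sed -i 's/^strict_chain/# strict_chain/; s/^#\s*round_robin/round_robin/' proxychains4.conf.sample
cat proxychains4.conf.sample
```

Point the same `sed` expressions at `/etc/proxychains4.conf` once you have verified them on a copy.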
Start Tor:
```
service tor start
```
Then, run any tool you want:
```
proxychains4 sometool
```
Using only Tor most likely won't be enough; you will need to add more proxies \([1](https://geonode.com/free-proxy-list)\)\([2](https://proxyscrape.com/home)\) to `/etc/proxychains4.conf`. However, it is hard to find free and stable proxies that are not already blacklisted.
Download a list of free proxies:
```
curl -s 'https://proxylist.geonode.com/api/proxy-list?limit=50&page=1&sort_by=lastChecked&sort_type=desc' -H 'Referer: https://proxylist.geonode.com/' | jq -r '.data[] | "\(.protocols[]) \(.ip) \(.port)"' > proxychains.txt
curl -s 'https://proxylist.geonode.com/api/proxy-list?limit=50&page=1&sort_by=lastChecked&sort_type=desc' -H 'Referer: https://proxylist.geonode.com/' | jq -r '.data[] | "\(.protocols[])://\(.ip):\(.port)"' > proxies.txt
```
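To sanity-check the `jq` filter above without hitting the API, run it on a minimal sample of the response shape (the structure below is inferred from the filter itself, not taken from the API docs):

```shell
# Minimal assumed sample of the geonode API response shape
cat <<'EOF' > proxy_sample.json
{"data":[{"protocols":["socks5"],"ip":"192.0.2.10","port":"1080"}]}
EOF
# Same filter as above: protocol, IP, and port per entry
jq -r '.data[] | "\(.protocols[]) \(.ip) \(.port)"' proxy_sample.json
# → socks5 192.0.2.10 1080
```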
## 1. Reconnaissance
**[`^ back to top ^`](#overview)**
### 1.1 Useful Websites
**[`^ back to top ^`](#overview)**
**Domain, IP & Network Reconnaissance**
* [whois.domaintools.com](https://whois.domaintools.com) – domain WHOIS lookup.
* [dnsdumpster.com](https://dnsdumpster.com/) – DNS recon and research.
* [network-tools.com](https://network-tools.com/nslook/) – network troubleshooting tools.
* [dnsqueries.com](https://www.dnsqueries.com/en/) – DNS diagnostic tools.
* [mxtoolbox.com](https://mxtoolbox.com/) – DNS, SMTP, and blacklist checks.
* [otx.alienvault.com](https://otx.alienvault.com) – domain lookup.
* [reverseip.domaintools.com](https://reverseip.domaintools.com) – web-based reverse IP lookup.
* [lookup.icann.org](https://lookup.icann.org) – ICANN registration data lookup.
* [sitereport.netcraft.com](https://sitereport.netcraft.com) – website infrastructure profiling.
* [searchdns.netcraft.com](https://searchdns.netcraft.com) – web-based DNS lookup.
* [search.censys.io](https://search.censys.io) – domain lookup and more.
* [crt.sh](https://crt.sh) – certificate fingerprinting.
* [radar.cloudflare.com](https://radar.cloudflare.com) – website lookup and more.
* [dnschecker.org](https://dnschecker.org/) – global DNS propagation check.
* [haveibeensquatted.com](https://haveibeensquatted.com/) – check for typosquatting domains.
* [ifconfig.io](https://ifconfig.io/) – network info.
* [abuseipdb.com](https://www.abuseipdb.com/) – check IP reputation and threat intelligence.
* [ipvoid.com](https://www.ipvoid.com/) – IP blacklists and tools.
* [myip.ms](https://myip.ms/) – IP address information.
* [search.arin.net](https://search.arin.net/) – ARIN IP/ASN search.
* [macaddress.io](https://macaddress.io/) – MAC address vendor lookup.
* [iknowwhatyoudownload.com](https://iknowwhatyoudownload.com/en/peer/) – discover torrent downloads by IP.
* [opencellid.org](https://opencellid.org/) – cell tower locations for OSINT.
**OSINT Frameworks & Search**
* [commoncrawl.org](https://commoncrawl.org/get-started) – web crawl dumps.
* [searchcode.com](https://searchcode.com) – source code search engine.
* [archive.org](https://archive.org) – wayback machine.
* [shodan.io](https://www.shodan.io) – IoT search engine.
* [whoisds.com](https://www.whoisds.com/newly-registered-domains) – newly registered domains.
* [osintframework.com](https://osintframework.com/) – huge collection of OSINT tools.
* [nitinpandey.in/ihunt](https://nitinpandey.in/ihunt/) – complete OSINT framework.
* [abhijithb200.github.io/investigator](https://abhijithb200.github.io/investigator/) – OSINT tools aggregator.
* [cybersec.org/search/index.php](https://cybersec.org/search/index.php) – specialized cybersecurity search engine.
* [extract.pics](https://extract.pics/) – extract images from websites.
**People, Accounts & Breaches**
* [haveibeenpwned.com](https://haveibeenpwned.com) – check if email/phone was compromised in a breach.
* [haveibeenpwned.com/Passwords](https://haveibeenpwned.com/Passwords) – Pwned passwords lookup.
* [intelx.io](https://intelx.io) – database breaches.
* [search.wikileaks.org](https://search.wikileaks.org) – WikiLeaks document search.
* [pgp.circl.lu](https://pgp.circl.lu) – OpenPGP key server.
* [sherlockeye.io](https://sherlockeye.io) – account lookup.
* [whatsmyname.app](https://whatsmyname.app/) – username enumeration across sites.
* [usersearch.ai](https://usersearch.ai/) – username OSINT.
* [sec.hpi.de/ilc/](https://sec.hpi.de/ilc/?) – identity leak checker.
* [bugmenot.com](http://bugmenot.com/) – bypass forced logins with shared accounts.
**Malware Analysis & Threat Intel (Sandboxes)**
* [opendata.rapid7.com](https://opendata.rapid7.com) – scan dumps.
* [virustotal.com](https://www.virustotal.com/gui/home/search) – malware database lookup.
* [virusscan.jotti.org](https://virusscan.jotti.org/en-US/scan-file) – free online malware scanner.
* [filescan.io](https://www.filescan.io/scan) – next-gen malware analysis platform.
* [virscan.org](https://www.virscan.org/) – multi-engine file scanner.
* [docguard.io](https://www.docguard.io/) – document malware analysis.
* [tria.ge](https://tria.ge/login) – malware analysis sandbox.
* [filesec.io](https://filesec.io/) – latest file extension security intel.
* [threatfox.abuse.ch](https://threatfox.abuse.ch/browse/) – share and search indicators of compromise (IOCs).
* [urlscan.io](https://urlscan.io/) – sandbox for URLs.
* [url2png.com](https://www.url2png.com/) – secure website screenshots.
* [phishtank.org](https://phishtank.org/) – phishing URL database.
**[`^ back to top ^`](#overview)**
### FOCA (Fingerprinting Organizations with Collected Archives)
[`FOCA (Fingerprinting Organizations with Collected Archives)`](https://github.com/ElevenPaths/FOCA) – Find metadata and hidden information in files.
Minimum requirements:
* Microsoft Windows (64 bits). Versions 7, 8, 8.1 and 10.
* Download and install [MS SQL Server 2014 Express](https://www.microsoft.com/en-us/download/details.aspx?id=42299) or greater.
* Download and install [MS .NET Framework 4.7.1 Runtime](https://dotnet.microsoft.com/download/dotnet-framework/net471) or greater.
* Download and install [MS Visual C++ 2010 (64-bit)](https://www.microsoft.com/en-us/download/developer-tools.aspx) or greater.
* Download and install [FOCA](https://github.com/ElevenPaths/FOCA/releases).
The GUI is very intuitive.
### DNS
**[`^ back to top ^`](#overview)**
#### ASNmap
**[`^ back to top ^`](#overview)**
Installation:
```
go install -v github.com/projectdiscovery/asnmap/cmd/asnmap@latest
```
Get the ProjectDiscovery API key from [cloud.projectdiscovery.io](https://cloud.projectdiscovery.io) and run:
```
asnmap -auth
```
Fetch ASN for IP:
```
asnmap --silent -r resolvers.txt -i ip | tee -a asnmap_asn_results.txt
```
Fetch CIDRs for ASN:
```
asnmap --silent -r resolvers.txt -a asn | tee -a asnmap_cidr_results.txt
```
**If ASN belongs to a cloud provider, you will get a lot of CIDRs / IPs, which might not be all within your scope!**
Fetch CIDRs for organization ID:
```
asnmap --silent -r resolvers.txt -org id | tee -a asnmap_cidr_results.txt
```
#### dig
**[`^ back to top ^`](#overview)**
Fetch name servers:
```
dig +noall +answer -t NS somedomain.com
```
Fetch mail exchange servers:
```
dig +noall +answer -t MX somedomain.com
```
Interrogate a name server:
```
dig +noall +answer -t ANY somedomain.com @ns.somedomain.com
dig any DOMAIN @IP_OR_DOMAIN
```
Fetch the zone file from a name server:
```
dig +noall +answer -t AXFR somedomain.com @ns.somedomain.com
dig axfr DOMAIN @IP_OR_DOMAIN
```
After that, test the name servers:
```
host -l <domain> <nameserver>
host -l domain.com ns2.domain.com
```
Reverse IP lookup:
```
dig +noall +answer -x 192.168.8.5
```
Subdomain takeover check: see if subdomains are dead by looking for `NXDOMAIN`, `SERVFAIL`, or `REFUSED` status codes:
```
for subdomain in $(cat subdomains.txt); do res=$(dig "${subdomain}" -t A +noall +comments +time=3 | grep -Po '(?<=status\:\ )[^\s\,]+'); if [[ ! -z $res ]]; then echo "${subdomain} | ${res}"; fi; done | sort -uf | tee -a subdomain_statuses.txt
```
#### DNSmap
**[`^ back to top ^`](#overview)**
Brute force subdomains; the results file is saved in `/tmp`:
```
dnsmap targetdomain.com -r
```
#### DNSRecon
**[`^ back to top ^`](#overview)**
DNS brute force:
```
dnsrecon -d TARGET -D /usr/share/wordlists/dnsmap.txt -t std --xml output.xml
```
Interrogate name servers:
```
dnsrecon -t std --json /root/Desktop/dnsrecon_std_results.json -d somedomain.com
dnsrecon -t axfr --json /root/Desktop/dnsrecon_axfr_results.json -d somedomain.com
dnsrecon --iw -f --threads 50 --lifetime 3 -t brt --json /root/Desktop/dnsrecon_brt_results.json -D subdomains-top1mil.txt -d somedomain.com
```
DNSRecon can perform a dictionary attack with a user-defined wordlist, but make sure to specify a full path to the wordlist; otherwise, DNSRecon might not recognize it.
Make sure to specify a full path to the output file; otherwise, it will default to the `/usr/share/dnsrecon/` directory, i.e., the tool's root directory.
Extract subdomains from the results:
```
jq -r '.[] | select(.type | test("^A$|^CNAME$|^SRV$")) | .name // empty, .target // empty' dnsrecon_std_results.json | sort -uf | tee -a subdomains.txt
```
Extract IPs from the results:
```
jq -r '.[] | select(.type | test("^A$|^CNAME$|^PTR$")) | .address // empty' dnsrecon_std_results.json | sort -uf | tee -a ips.txt
```
Extract canonical names (CNAMEs) from the results:
```
jq -r '.[] | select(.type | test("^CNAME$")) | .target // empty' dnsrecon_std_results.json | sort -uf | tee -a cnames.txt
```
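The `jq` extraction filters above can be tried on a minimal sample of DNSRecon's JSON output (the record fields below are inferred from the filters themselves and are illustrative):

```shell
# Minimal assumed sample of DNSRecon's JSON output
cat <<'EOF' > dnsrecon_sample.json
[{"type":"A","name":"www.somedomain.com","address":"192.168.8.5"},
 {"type":"CNAME","name":"blog.somedomain.com","target":"www.somedomain.com"}]
EOF
# IP extraction: only records that carry an "address" field survive "// empty"
jq -r '.[] | select(.type | test("^A$|^CNAME$|^PTR$")) | .address // empty' dnsrecon_sample.json
# → 192.168.8.5
# CNAME extraction: pulls the "target" of CNAME records
jq -r '.[] | select(.type | test("^CNAME$")) | .target // empty' dnsrecon_sample.json
# → www.somedomain.com
```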
Reverse IP lookup:
```
dnsrecon --json /root/Desktop/dnsrecon_ptr_results.json -s -r 192.168.8.0/24
```
Extract subdomains from the reverse IP lookup results:
```
jq -r '.[] | if type == "array" then .[].name else empty end' dnsrecon_ptr_results.json | sort -uf | tee -a subdomains.txt
```
#### Fierce
**[`^ back to top ^`](#overview)**
Interrogate name servers:
```
fierce -dns targetdomain.com
fierce -file fierce_std_results.txt --domain somedomain.com
fierce -file fierce_brt_results.txt --subdomain-file subdomains-top1mil.txt --domain somedomain.com
```
**By default, Fierce will perform a dictionary attack with its built-in wordlist.**
#### host
**[`^ back to top ^`](#overview)**
**Some DNS servers will not respond to DNS queries of type `ANY`; use type `A` instead.**
Gather IPs for the given subdomains (ask for `A` records):
```
for subdomain in $(cat subdomains.txt); do res=$(host -t A "${subdomain}" | grep -Po '(?<=has\ address\ )[^\s]+'); if [[ ! -z $res ]]; then echo "${subdomain} | ${res//$'\n'/ | }"; fi; done | sort -uf | tee -a subdomains_to_ips.txt
```
Fetch name servers:
```
host -t ns domain.com
```
After that, test the name servers:
```
host -l <domain> <nameserver>
host -l domain.com ns2.domain.com
```
#### Nmap Enumaration
**[`^ back to top ^`](#overview)**
```
nmap -F --dns-servers ns.somedomain.com somedomain.com
```
#### WHOIS
**[`^ back to top ^`](#overview)**
Gather ASNs from IPs:
```
for ip in $(cat ips.txt); do res=$(whois -h whois.cymru.com "${ip}" | grep -Poi '^\d+'); if [[ ! -z $res ]]; then echo "${ip} | ${res//$'\n'/ | }"; fi; done | sort -uf | tee -a ips_to_asns.txt
grep -Po '(?<=\|\ )(?:(?!\ \|).)+' ips_to_asns.txt | sort -uf | tee -a asns.txt
```
**If ASN belongs to a cloud provider, you will get a lot of CIDRs / IPs, which might not be all within your scope!**
Gather organization names from IPs:
```
for ip in $(cat ips.txt); do res=$(whois -h whois.arin.net "${ip}" | grep -Po '(?<=OrgName\:)[\s]+\K.+'); if [[ ! -z $res ]]; then echo "${ip} | ${res//$'\n'/ | }"; fi; done | sort -uf | tee -a ips_to_organization_names.txt
grep -Po '(?<=\|\ )(?:(?!\ \|).)+' ips_to_organization_names.txt | sort -uf | tee -a organization_names.txt
```
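The `OrgName` extraction above can be checked on a single sample WHOIS line (the organization name below is made up):

```shell
# Sample WHOIS line; \K drops the matched whitespace so only the name is printed
printf 'OrgName:        Example Cloud LLC\n' | grep -Po '(?<=OrgName\:)[\s]+\K.+'
# → Example Cloud LLC
```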
Check if any of the IPs belong to [GitHub](https://github.com) organization, read more about GitHub takeover in this [H1 article](https://www.hackerone.com/application-security/guide-subdomain-takeovers).
### Amass
**[`^ back to top ^`](#overview)**
Gather subdomains using OSINT:
```
amass enum -o amass_results.txt -trf resolvers.txt -d somedomain.com
```
**Amass has built-in DNS resolvers.**
Extract IPs from the results:
```
grep -Po '(?<=(?:a_record|contains)\ \-\-\>\ )[^\s]+' amass_results.txt | sort -uf | tee -a ips.txt
```
Extract subdomains from the results:
```
grep -Po '^[^\s]+(?=\ \(FQDN\))|(?<=ptr_record\ \-\-\>\ )[^\s]+' amass_results.txt | sort -uf | tee -a subdomains.txt
```
Extract canonical names (CNAMEs) from the results:
```
grep -Po '(?<=cname_record\ \-\-\>\ )[^\s]+' amass_results.txt | sort -uf | tee -a cnames.txt
```
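To see what these lookbehind filters pull out, run one on a single sample line in Amass's `-->` output format (the line below is illustrative, assumed from the filters above):

```shell
# Sample Amass result line; the lookbehind keeps only the token after "a_record --> "
echo 'www.somedomain.com (FQDN) --> a_record --> 192.168.8.5 (IPAddress)' | grep -Po '(?<=a_record\ \-\-\>\ )[^\s]+'
# → 192.168.8.5
```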
The below ASN and CIDR scans will take a long time to finish.
**If ASN belongs to a cloud provider, you will get a lot of CIDRs / IPs, which might not be all within your scope!**
Gather subdomains from ASN:
```
amass intel -o amass_asn_results.txt -trf resolvers.txt -asn 13337
```
Gather subdomains from CIDR:
```
amass intel -o amass_cidr_results.txt -trf resolvers.txt -cidr 192.168.8.0/24
```
### assetfinder
**[`^ back to top ^`](#overview)**
Gather subdomains using OSINT:
```
assetfinder --subs-only somedomain.com | grep -v '*' | tee assetfinder_results.txt
```
### Sublist3r
**[`^ back to top ^`](#overview)**
Gather subdomains using OSINT:
```
sublist3r -o sublister_results.txt -d somedomain.com
```
### Subfinder
**[`^ back to top ^`](#overview)**
Installation:
```
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
```
Gather subdomains using OSINT:
```
subfinder -t 10 -timeout 3 -nW -o subfinder_results.txt -rL resolvers.txt -d somedomain.com
```
**Subfinder has built-in DNS resolvers.**
Set your API keys in `/root/.config/subfinder/provider-config.yaml` file as following:
```
shodan:
  - SHODAN_API_KEY
censys:
  - CENSYS_API_ID:CENSYS_API_SECRET
github:
  - GITHUB_API_KEY
virustotal:
  - VIRUSTOTAL_API_KEY
```
### httpx
**[`^ back to top ^`](#overview)**
Check if subdomains are alive, map live hosts:
```
httpx-toolkit -o httpx_results.txt -l subdomains.txt
httpx-toolkit -random-agent -json -o httpx_results.json -threads 100 -timeout 3 -l subdomains.txt -ports 80,81,443,4443,8000,8008,8080,8081,8403,8443,8888,9000,9008,9080,9081,9403,9443
```
Filter out subdomains from the JSON results:
```
jq -r 'select(."status_code" | tostring | test("^2|^3|^4")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long.txt
jq -r 'select(."status_code" | tostring | test("^2")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long_2xx.txt
jq -r 'select(."status_code" | tostring | test("^2|^4")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long_2xx_4xx.txt
jq -r 'select(."status_code" | tostring | test("^3")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long_3xx.txt
jq -r 'select(."status_code" | tostring | test("^401$")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long_401.txt
jq -r 'select(."status_code" | tostring | test("^403$")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long_403.txt
jq -r 'select(."status_code" | tostring | test("^4")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long_4xx.txt
jq -r 'select(."status_code" | tostring | test("^5")).url' httpx_results.json | sort -uf | tee -a subdomains_live_long_5xx.txt
grep -Po 'http\:\/\/[^\s]+' subdomains_live_long.txt | sort -uf | tee -a subdomains_live_long_http.txt
grep -Po 'https\:\/\/[^\s]+' subdomains_live_long.txt | sort -uf | tee -a subdomains_live_long_https.txt
grep -Po '(?<=\:\/\/)[^\s]+' subdomains_live_long.txt | sort -uf | tee -a subdomains_live_short.txt
grep -Po '(?<=http\:\/\/)[^\s]+' subdomains_live_long.txt | sort -uf | tee -a subdomains_live_short_http.txt
grep -Po '(?<=https\:\/\/)[^\s]+' subdomains_live_long.txt | sort -uf | tee -a subdomains_live_short_https.txt
grep -Po '(?<=\:\/\/)[^\s\:]+' subdomains_live_long.txt | sort -uf | tee -a subdomains_live.txt
```
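To see what the `grep` lookbehind filters above extract, run one on a single result line (the hostname is illustrative):

```shell
# Strips the scheme and the port, leaving only the hostname
echo 'https://app.somedomain.com:8443' | grep -Po '(?<=\:\/\/)[^\s\:]+'
# → app.somedomain.com
```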
Check if a path exists on a web server:
```
httpx-toolkit -status-code -content-length -o httpx_results.txt -l subdomains_live_long.txt -path /.git
```
### gau
**[`^ back to top ^`](#overview)**
Gather URLs from the [wayback machine](https://archive.org):
```
getallurls somedomain.com | tee gau_results.txt
for subdomain in $(cat subdomains_live.txt); do getallurls "${subdomain}"; done | sort -uf | tee gau_results.txt
```
Filter out URLs from the results:
```
httpx-toolkit -random-agent -json -o httpx_gau_results.json -threads 100 -timeout 3 -r resolvers.txt -l gau_results.txt
jq -r 'select(."status_code" | tostring | test("^2")).url' httpx_gau_results.json | sort -uf | tee -a gau_2xx_results.txt
jq -r 'select(."status_code" | tostring | test("^2|^4")).url' httpx_gau_results.json | sort -uf | tee -a gau_2xx_4xx_results.txt
jq -r 'select(."status_code" | tostring | test("^3")).url' httpx_gau_results.json | sort -uf | tee -a gau_3xx_results.txt
jq -r 'select(."status_code" | tostring | test("^401$")).url' httpx_gau_results.json | sort -uf | tee -a gau_401_results.txt
jq -r 'select(."status_code" | tostring | test("^403$")).url' httpx_gau_results.json | sort -uf | tee -a gau_403_results.txt
```
### urlhunter
**[`^ back to top ^`](#overview)**
Installation:
```
go install -v github.com/utkusen/urlhunter@latest
```
Gather URLs from URL shortening services:
```
urlhunter -o urlhunter_results.txt -date latest -keywords subdomains_live.txt
```
### wfuzz
**[`^ back to top ^`](#overview)**
Wfuzz was created to facilitate web application assessments, and it is based on a simple concept: it replaces any reference to the `FUZZ` keyword with the value of a given payload.
```
pipx install wfuzz
```
Let's search for subdomains (virtual hosts) with wfuzz; replace `TARGET` with the target host:
```
wfuzz -c -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -u "http://TARGET" -H "Host: FUZZ.TARGET" --hl 7
wfuzz -c -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-20000.txt --hc 400,404,403 -H "Host: FUZZ.TARGET" -u http://TARGET -t 100
wfuzz -H "Host: FUZZ.TARGET" --hw 11 -c -z file,"/usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt" http://TARGET/
```
Fuzz directories:
```
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X GET -u https://somesite.com/FUZZ -w directory-list-lowercase-2.3-medium.txt
```
Let's search for directories and files with wfuzz:
```
wfuzz -c -z file,/usr/share/seclists/Discovery/Web-Content/directory-list-2.3-small.txt --sc 200,202,204,301,302,307,403 http://TARGET/FUZZ
```
Login form brute force. POST, single list, filter by string (hide):
```
wfuzz -c -w users.txt --hs "Login name" -d "name=FUZZ&password=FUZZ&autologin=1&enter=Sign+in" http://TARGET/zabbix/index.php
# Here we have filtered by string
```
Login form brute force. POST, 2 lists, filter by code (show):
```
wfuzz -c -z file,users.txt -z file,pass.txt --sc 200 -d "name=FUZZ&password=FUZ2Z&autologin=1&enter=Sign+in" http://TARGET/zabbix/index.php
# Here we have filtered by code
```
Login form brute force. GET, 2 lists, filter by string (show), proxy, cookies:
```
wfuzz -c -w users.txt -w pass.txt --ss "Welcome " -p 127.0.0.1:8080:HTTP -b "PHPSESSIONID=1234567890abcdef;customcookie=hey" "http://example.com/index.php?username=FUZZ&password=FUZ2Z&action=sign+in"
```
Cookie/header brute force (vhost brute). Cookie, filter by string (show), proxy:
```
wfuzz -c -w users.txt -p 127.0.0.1:8080:HTTP --ss "Welcome " -H "Cookie:id=1312321&user=FUZZ" "http://example.com/index.php"
```
Cookie/header brute force (vhost brute). User-Agent, filter by string (show), proxy:
```
wfuzz -c -w user-agents.txt -p 127.0.0.1:8080:HTTP --ss "Welcome " -H "User-Agent: FUZZ" "http://example.com/index.php"
```
Fuzz parameter values:
```
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X GET -u "https://somesite.com/someapi?someparam=FUZZ" -w somewordlist.txt
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X POST -H "Content-Type: application/x-www-form-urlencoded" -u "https://somesite.com/someapi" -d "someparam=FUZZ" -w somewordlist.txt
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X POST -H "Content-Type: application/json" -u "https://somesite.com/someapi" -d "{\"someparam\": \"FUZZ\"}" -w somewordlist.txt
```
Fuzz parameters:
```
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X GET -u "https://somesite.com/someapi?FUZZ=somevalue" -w somewordlist.txt
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X POST -H "Content-Type: application/x-www-form-urlencoded" -u "https://somesite.com/someapi" -d "FUZZ=somevalue" -w somewordlist.txt
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X POST -H "Content-Type: application/json" -u "https://somesite.com/someapi" -d "{\"FUZZ\": \"somevalue\"}" -w somewordlist.txt
```
Additional example, internal SSRF fuzzing:
```
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X GET -u "https://somesite.com/someapi?url=127.0.0.1:FUZZ" -w ports.txt
wfuzz -t 30 -f wfuzz_results.txt --hc 404,405 -X GET -u "https://somesite.com/someapi?url=FUZZ:80" -w ips.txt
```
| Option | Description |
| --- | --- |
| -f | Store results in the output file |
| -t | Specify the number of concurrent connections (10 default) |
| -s | Specify time delay between requests (0 default) |
| -u | Specify a URL for the request |
| -w | Specify a wordlist file |
| -X | Specify an HTTP method for the request, e.g., HEAD, or FUZZ to fuzz the method itself |
| -b | Specify a cookie for the requests |
| -d | Use post data |
| -H | Use header |
| --hc/--hl/--hw/--hh | Hide responses with the specified code/lines/words/chars |
| --sc/--sl/--sw/--sh | Show responses with the specified code/lines/words/chars |
| --ss/--hs | Show/hide responses with the specified regex within the content |
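The `--hl/--hw/--hh` filters key off the line, word, and char counts wfuzz reports for each response. To pick filter values, you can reproduce those counts for a saved response body with `wc` (the file name and content here are placeholders):

```shell
# Save a known "not found" response body, then read off its lines/words/chars;
# e.g. feed the char count to --hh to hide all responses of that exact size
printf 'Not Found\npage does not exist\n' > response.html
wc -l -w -c response.html
```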
### Directory Fuzzing
**[`^ back to top ^`](#overview)**
**Don't forget that GNU/Linux file systems are case-sensitive, so make sure to use the right wordlists.**
If you don't get any hits while brute-forcing directories, try brute-forcing files by specifying file extensions.
The tools below support recursive directory and file search. They might also take a long time to finish, depending on the settings and wordlist used.
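One way to handle case sensitivity is to derive a lowercase, de-duplicated copy of a mixed-case wordlist before fuzzing; a minimal sketch (the sample words stand in for a real wordlist):

```shell
# wordlist.txt stands in for your real wordlist
printf '%s\n' Admin admin LOGIN login > wordlist.txt
# Lowercase and de-duplicate; a Linux target won't treat "Admin" and "admin" as one path
tr '[:upper:]' '[:lower:]' < wordlist.txt | sort -u > wordlist_lower.txt
```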
#### dirb
**[`^ back to top ^`](#overview)**
```
dirb http://target.com /path/to/wordlist
```
```
dirb http://target.com /path/to/wordlist -X .sh,.txt,.htm,.php,.cgi,.html,.pl,.bak,.old
```
#### DirBuster
**[`^ back to top ^`](#overview)**
All DirBuster's wordlists are located in the `/usr/share/dirbuster/wordlists/` directory.
#### Dirsearch
**[`^ back to top ^`](#overview)**
Let's search the directories with dirsearch.
```
dirsearch -u http://<target>:<port>/ --exclude-status 403,404,400,401 -o dir
```
Let's search with file extensions:
```
dirsearch -u target.com -e sh,txt,htm,php,cgi,html,pl,bak,old
```
```
dirsearch -u target.com -e sh,txt,htm,php,cgi,html,pl,bak,old -w path/to/wordlist
```
```
dirsearch -u https://target.com -e .
```
#### feroxbuster
**[`^ back to top ^`](#overview)**
Brute force directories on a web server:
```
cat subdomains_live_long.txt | feroxbuster --stdin -k -n --auto-bail --random-agent -t 50 -T 3 --json -o feroxbuster_results.json -s 200,301,302,401,403 -w raft-small-directories-lowercase.txt
```
This tool is way faster than [DirBuster](#dirbuster).
Filter out directories from the results:
```
jq -r 'select(.status | tostring | test("^2")).url' feroxbuster_results.json | sort -uf | tee -a directories_2xx.txt
jq -r 'select(.status | tostring | test("^2|^4")).url' feroxbuster_results.json | sort -uf | tee -a directories_2xx_4xx.txt
jq -r 'select(.status | tostring | test("^3")).url' feroxbuster_results.json | sort -uf | tee -a directories_3xx.txt
jq -r 'select(.status | tostring | test("^401$")).url' feroxbuster_results.json | sort -uf | tee -a directories_401.txt
jq -r 'select(.status | tostring | test("^403$")).url' feroxbuster_results.json | sort -uf | tee -a directories_403.txt
```
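Before splitting the results, a quick per-status tally of the same NDJSON output can help gauge the scan (a sketch, assuming each result entry carries a numeric `status` field as in the filters above):

```shell
# Tally feroxbuster results per status code, most frequent first
jq -r 'select(.status != null) | .status' feroxbuster_results.json | sort | uniq -c | sort -rn
```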
| Option | Description |
| --- | --- |
| -u | The target URL (required, unless --stdin \| --resume-from is used) |
| --stdin | Read URL(s) from STDIN |
| -a/-A | Sets the User-Agent (default: feroxbuster\/x.x.x) \/ Use a random User-Agent |
| -x | File extension(s) to search for (ex: -x php -x pdf,js) |
| -m | Which HTTP request method(s) should be sent (default: GET) |
| --data | Request's body; can read data from a file if input starts with an \@(ex: \@post.bin) |
| -H | Specify HTTP headers to be used in each request (ex: -H header:val -H 'stuff:things') |
| -b | Specify HTTP cookies to be used in each request (ex: -b stuff=things) |
| -Q | Request's URL query parameters (ex: -Q token=stuff -Q secret=key) |
| -f | Append \/ to each request's URL |
| -s | Status Codes to include (allow list) (default: 200,204,301,302,307,308,401,403,405) |
| -T | Number of seconds before a client's request times out (default: 7) |
| -k | Disables TLS certificate validation for the client |
| -t | Number of concurrent threads (default: 50) |
| -n | Do not scan recursively |
| -w | Path to the wordlist |
| --auto-bail | Automatically stop scanning when an excessive amount of errors are encountered |
| -B | Automatically request likely backup extensions for "found" URLs (default: ~, .bak, .bak2, .old, .1) |
| -q | Hide progress bars and banner (good for tmux windows w/ notifications) |
| -o | Output file to write results to (use w/ --json for JSON entries) |
#### ffuf
**[`^ back to top ^`](#overview)**
Directory fuzzing:
```
ffuf -u http://<target>/FUZZ -w /usr/share/dirb/wordlists/common.txt -mc 200,204,301,302,307
ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-medium-directories.txt -u "http://<target>/FUZZ" -c
```
Subdomain search with ffuf:
```
ffuf -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-20000.txt -u "http://<target>" -H "Host: FUZZ.<domain>" -c -fs 169
```
#### gobuster
**[`^ back to top ^`](#overview)**
```
gobuster dir -u https://target.com -w /usr/share/wordlists/dirb/big.txt
```
Let's search the directories with gobuster. In the parameters, we specify 128 threads (`-t`), the URL (`-u`), the wordlist (`-w`), and the extensions we are interested in (`-x`).
```
gobuster dir -t 128 -k -u http://<target> -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -x php,txt,html,sh,cgi
gobuster dir -t 50 -k -u http://<target>:49663 -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -s '200,301' --no-error
```
Let's search for subdomains with gobuster:
```
gobuster vhost -u http://<target> -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-110000.txt -k
gobuster vhost -u http://<target> -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt --append-domain -t 20
```
If we see a DNS server among the open ports, let's try to enumerate domains:
```
gobuster dns -d <domain> -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-20000.txt -r <target>:53
```
### Google Dorks
**[`^ back to top ^`](#overview)**
Google Dork databases:
* [exploit-db.com/google-hacking-database](https://www.exploit-db.com/google-hacking-database)
* [cxsecurity.com/dorks](https://cxsecurity.com/dorks)
* [pentest-tools.com/information-gathering/google-hacking](https://pentest-tools.com/information-gathering/google-hacking)
* [opsdisk/pagodo/blob/master/dorks/all_google_dorks.txt](https://github.com/opsdisk/pagodo/blob/master/dorks/all_google_dorks.txt)
Check the list of `/.well-known/` files [here](https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml).
Google Dorking will not show directories or files that are disallowed in `robots.txt`; to check for such directories and files, use [httpx](#httpx).
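To complement the `robots.txt` check mentioned above, you can pull the `Disallow` entries out of a fetched copy directly (the heredoc content below is sample data standing in for a real download):

```shell
# robots.txt stands in for a file fetched from the target
cat > robots.txt << 'EOF'
User-agent: *
Disallow: /admin/
Disallow: /backup/
EOF
# Extract the disallowed paths, case-insensitively, one per line
grep -Poi '(?<=^disallow: ).+' robots.txt | sort -u
```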
Append `site:www.somedomain.com` to limit your scope to a specified subdomain.
Append `site:*.somedomain.com` to limit your scope to all subdomains.
Append `site:*.somedomain.com -www` to exclude `www` subdomain from the results.
Simple Google Dorks:
```
inurl:/robots.txt intext:disallow ext:txt
inurl:/.well-known/security.txt ext:txt
inurl:/info.php intext:"php version" ext:php
intitle:"index of /" intext:"parent directory"
intitle:"index of /.git" intext:"parent directory"
inurl:/gitweb.cgi
intitle:"Dashboard [Jenkins]"
(intext:"mysql database" AND intext:db_password) ext:txt
intext:-----BEGIN PGP PRIVATE KEY BLOCK----- (ext:pem OR ext:key OR ext:txt)
```
### Chad
**[`^ back to top ^`](#overview)**
Find and download files using a Google Dork:
```
mkdir chad_downloads
chad -nsos -o chad_downloads_results.json -dir chad_downloads -tr 200 -q "ext:txt OR ext:json OR ext:yml OR ext:pdf OR ext:doc OR ext:docx OR ext:xls OR ext:xlsx OR ext:zip OR ext:tar OR ext:rar OR ext:gzip OR ext:7z" -s *.somedomain.com
```
Extract authors (and more) from the files:
```
apt -y install libimage-exiftool-perl
exiftool -S chad_downloads | grep -Po '(?<=Author\:\ ).+' | sort -uf | tee -a people.txt
```
Find directory listings using a Google Dork:
```
chad -nsos -o chad_directory_listings_results.json -tr 200 -q 'intitle:"index of /" intext:"parent directory"' -s *.somedomain.com
```
More about the project at [ivan-sincek/chad](https://github.com/ivan-sincek/chad).
### PhoneInfoga
**[`^ back to top ^`](#overview)**
Download the latest version from [GitHub](https://github.com/sundowndev/phoneinfoga/releases) and check how to [install](#0-install-and-setup-tools) the tool.
Get information about a phone number:
```
phoneinfoga scan -n +1111111111
```
Get information about a phone number using the web UI:
```
phoneinfoga serve -p 5000
```
Navigate to `http://localhost:5000` with your preferred web browser.
### git-dumper
**[`^ back to top ^`](#overview)**
Try to reconstruct a Git repository, i.e., recover the source code, from a publicly exposed `/.git` directory:
```
# git-dumper