Import upstream version 0.6
Kali Janitor
3 | 3 | Currently enumerates the following: |
4 | 4 | |
5 | 5 | **Amazon Web Services**: |
6 | - Open S3 Buckets | |
7 | - Protected S3 Buckets | |
6 | - Open / Protected S3 Buckets | |
7 | - awsapps (WorkMail, WorkDocs, Connect, etc.) | |
8 | 8 | |
9 | 9 | **Microsoft Azure**: |
10 | 10 | - Storage Accounts |
14 | 14 | - Web Apps |
15 | 15 | |
16 | 16 | **Google Cloud Platform** |
17 | - Open GCP Buckets | |
18 | - Protected GCP Buckets | |
17 | - Open / Protected GCP Buckets | |
18 | - Open / Protected Firebase Realtime Databases | |
19 | 19 | - Google App Engine sites |
20 | - Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names) | |
20 | 21 | |
21 | By "open" buckets/containers, I mean those that allow anonymous users to list contents. If you discover a protected bucket/container, it is still worth trying to brute force the contents with another tool. | |
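For instance, one quick way to confirm an open S3 bucket by hand is an anonymous listing request (the bucket name below is only a placeholder):

```sh
# An open bucket answers an anonymous GET on the bucket root with an
# XML <ListBucketResult> of its contents; protected buckets return an error.
curl -s https://somebucket.s3.amazonaws.com/
```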
22 | ||
23 | **IMPORTANT**: Azure Virtual Machine DNS records can span a lot of geo regions. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py. You'll want to look at this file and edit it to be relevant to your own work. | |
22 | See it in action in [Codingo](https://github.com/codingo)'s video demo [here](https://www.youtube.com/embed/pTUDJhWJ1m0). | |
24 | 23 | |
25 | 24 | <img src="https://initstring.keybase.pub/host/images/cloud_enum.png" align="center"/> |
26 | 25 | |
28 | 27 | # Usage |
29 | 28 | |
30 | 29 | ## Setup |
31 | You'll need the `requests-futures` python package, as this tool uses it for multi-threading HTTP requests. It's a very cool package if you're already using `requests`, I highly recommend it. | |
30 | Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows: | |
32 | 31 | |
33 | 32 | ```sh |
34 | 33 | pip3 install -r ./requirements.txt |
39 | 38 | |
40 | 39 | You can provide multiple keywords by specifying the `-k` argument multiple times. |
41 | 40 | |
42 | Azure Containers required two levels of brute-forcing, both handled automatically by this tool. First, by finding valid accounts (DNS). Then, by brute-forcing container names inside that account (HTTP scraping). The tool uses the same fuzzing file for both by default, but you can specificy individual files separately if you'd like. | |
41 | Keywords are mutated automatically using strings from `enum_tools/fuzz.txt` or a file you provide with the `-m` flag. Services that require a second level of brute-forcing (Azure Containers and GCP Functions) will also use `fuzz.txt` by default or a file you provide with the `-b` flag. | |
43 | 42 | |
44 | 43 | Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this: |
45 | 44 | |
47 | 46 | cloudenum.py -k somecompany -k somecompany.io -k blockchaindoohickey |
48 | 47 | ``` |
49 | 48 | |
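To supply your own wordlists instead of the default `fuzz.txt`, pass them with `-m` (mutations) and `-b` (second-level brute-forcing); the file paths below are only placeholders:

```sh
cloudenum.py -k somecompany -m /path/to/mutations.txt -b /path/to/container-names.txt
```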
50 | DNS brute-forcing uses a hard-coded 25 threads, leveraging subprocess and the Linux `host` command. | |
51 | ||
52 | HTTP scraping uses 5 threads by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example to increase to 10. | |
49 | HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example to increase to 10. | |
53 | 50 | |
54 | 51 | ```sh |
55 | 52 | cloudenum.py -k keyword -t 10 |
56 | 53 | ``` |
54 | ||
55 | **IMPORTANT**: Some resources (Azure Containers, GCP Functions) are discovered per region. To save time scanning, there is a "REGIONS" variable defined in `cloudenum/azure_regions.py` and `cloudenum/gcp_regions.py` that is set by default to use only one region. You may want to look at these files and edit them to be relevant to your own work. | |
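As a rough sketch (the region names below are only examples), limiting the GCP Functions checks amounts to overriding the final `REGIONS` assignment in the GCP regions file, following the same pattern shown in `gcp_regions.py` later in this diff:

```python
# The last assignment to REGIONS wins, so add an override near the bottom
# of the file listing only the regions relevant to your target.
REGIONS = ['us-central1', 'europe-west1']
```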
57 | 56 | |
58 | 57 | **Complete Usage Details** |
59 | 58 | ``` |
68 | 67 | -kf KEYFILE, --keyfile KEYFILE |
69 | 68 | Input file with a single keyword per line. |
70 | 69 | -m MUTATIONS, --mutations MUTATIONS |
71 | Mutations. Default: cloud_enum/mutations.txt. | |
70 | Mutations. Default: enum_tools/fuzz.txt | |
72 | 71 | -b BRUTE, --brute BRUTE |
73 | 72 | List to brute-force Azure container names. Default: |
74 | cloud_enum/brute.txt. | |
73 | enum_tools/fuzz.txt | |
75 | 74 | -t THREADS, --threads THREADS |
76 | 75 | Threads for HTTP brute-force. Default = 5 |
77 | 76 | -ns NAMESERVER, --nameserver NAMESERVER |
81 | 80 | --disable-aws Disable Amazon checks. |
82 | 81 | --disable-azure Disable Azure checks. |
83 | 82 | --disable-gcp Disable Google checks. |
83 | -qs, --quickscan Disable all mutations and second-level scans | |
84 | ||
84 | 85 | ``` |
85 | 86 | |
86 | 87 | # Thanks |
77 | 77 | |
78 | 78 | parser.add_argument('--disable-gcp', action='store_true', |
79 | 79 | help='Disable Google checks.') |
80 | ||
81 | parser.add_argument('-qs', '--quickscan', action='store_true', | |
82 | help='Disable all mutations and second-level scans') | |
80 | 83 | |
81 | 84 | args = parser.parse_args() |
82 | 85 | |
127 | 130 | Print a short pre-run status message |
128 | 131 | """ |
129 | 132 | print("Keywords: {}".format(', '.join(args.keyword))) |
130 | print("Mutations: {}".format(args.mutations)) | |
133 | if args.quickscan: | |
134 | print("Mutations: NONE! (Using quickscan)") | |
135 | else: | |
136 | print("Mutations: {}".format(args.mutations)) | |
131 | 137 | print("Brute-list: {}".format(args.brute)) |
132 | 138 | print("") |
139 | ||
140 | def check_windows(): | |
141 | """ | |
142 | Fixes pretty color printing for Windows users. Keeping out of | |
143 | requirements.txt to avoid the library requirement for most users. | |
144 | """ | |
145 | if os.name == 'nt': | |
146 | try: | |
147 | import colorama | |
148 | colorama.init() | |
149 | except ModuleNotFoundError: | |
150 | print("[!] Yo, Windows user - if you want pretty colors, you can" | |
151 | " install the colorama python package.") | |
133 | 152 | |
134 | 153 | def read_mutations(mutations_file): |
135 | 154 | """ |
192 | 211 | # Generate a basic status on targets and parameters |
193 | 212 | print_status(args) |
194 | 213 | |
195 | # First, build a sort base list of target names | |
196 | mutations = read_mutations(args.mutations) | |
214 | # Give our Windows friends a chance at pretty colors | |
215 | check_windows() | |
216 | ||
217 | # First, build a sorted base list of target names | |
218 | if args.quickscan: | |
219 | mutations = [] | |
220 | else: | |
221 | mutations = read_mutations(args.mutations) | |
197 | 222 | names = build_names(args.keyword, mutations) |
198 | 223 | |
199 | 224 | # All the work is done in the individual modules |
12 | 12 | |
13 | 13 | # Known S3 domain names |
14 | 14 | S3_URL = 's3.amazonaws.com' |
15 | APPS_URL = 'awsapps.com' | |
15 | 16 | |
16 | 17 | # Known AWS region names. This global will be used unless the user passes |
17 | 18 | # in a specific region name. (NOT YET IMPLEMENTED) |
86 | 87 | # Stop the time |
87 | 88 | utils.stop_timer(start_time) |
88 | 89 | |
90 | def check_awsapps(names, threads, nameserver): | |
91 | """ | |
92 | Checks for existence of AWS Apps | |
93 | (ie. WorkDocs, WorkMail, Connect, etc.) | |
94 | """ | |
95 | print("[+] Checking for AWS Apps") | |
96 | ||
97 | # Start a counter to report on elapsed time | |
98 | start_time = utils.start_timer() | |
99 | ||
100 | # Initialize the list of domain names to look up | |
101 | candidates = [] | |
102 | ||
103 | # Initialize the list of valid hostnames | |
104 | valid_names = [] | |
105 | ||
106 | # Take each mutated keyword and craft a domain name to look up. | |
107 | for name in names: | |
108 | candidates.append('{}.{}'.format(name, APPS_URL)) | |
109 | ||
110 | # AWS Apps use DNS sub-domains. First, see which are valid. | |
111 | valid_names = utils.fast_dns_lookup(candidates, nameserver, | |
112 | threads=threads) | |
113 | ||
114 | for name in valid_names: | |
115 | utils.printc(" App Found: https://{}\n" .format(name), 'orange') | |
116 | ||
117 | # Stop the timer | |
118 | utils.stop_timer(start_time) | |
119 | ||
89 | 120 | def run_all(names, args): |
90 | 121 | """ |
91 | 122 | Function is called by main program |
96 | 127 | #if not regions: |
97 | 128 | # regions = AWS_REGIONS |
98 | 129 | check_s3_buckets(names, args.threads) |
99 | return '' | |
130 | check_awsapps(names, args.threads, args.nameserver) |
73 | 73 | candidates.append('{}.{}'.format(name, BLOB_URL)) |
74 | 74 | |
75 | 75 | # Azure Storage Accounts use DNS sub-domains. First, see which are valid. |
76 | valid_names = utils.fast_dns_lookup(candidates, nameserver) | |
76 | valid_names = utils.fast_dns_lookup(candidates, nameserver, | |
77 | threads=threads) | |
77 | 78 | |
78 | 79 | # Send the valid names to the batch HTTP processor |
79 | 80 | utils.get_url_batch(valid_names, use_ssl=False, |
100 | 101 | |
101 | 102 | # Stop brute forcing accounts without permission |
102 | 103 | if ('not authorized to perform this operation' in reply.reason or |
103 | 'not have sufficient permissions' in reply.reason): | |
104 | print(" [!] Breaking out early, auth errors.") | |
104 | 'not have sufficient permissions' in reply.reason or | |
105 | 'Public access is not permitted' in reply.reason or | |
106 | 'Server failed to authenticate the request' in reply.reason): | |
107 | print(" [!] Breaking out early, auth required.") | |
105 | 108 | return 'breakout' |
106 | 109 | |
107 | 110 | # Stop brute forcing unsupported accounts |
149 | 152 | valid_accounts.append(account) |
150 | 153 | |
151 | 154 | # Read the brute force file into memory |
152 | with open(brute_list, encoding="utf8", errors="ignore") as infile: | |
153 | names = infile.read().splitlines() | |
154 | ||
155 | # Clean up the names to usable for containers | |
156 | banned_chars = re.compile('[^a-z0-9-]') | |
157 | clean_names = [] | |
158 | for name in names: | |
159 | name = name.lower() | |
160 | name = banned_chars.sub('', name) | |
161 | if 63 >= len(name) >= 3: | |
162 | if name not in clean_names: | |
163 | clean_names.append(name) | |
155 | clean_names = utils.get_brute(brute_list, mini=3) | |
164 | 156 | |
165 | 157 | # Start a counter to report on elapsed time |
166 | 158 | start_time = utils.start_timer() |
195 | 187 | utils.printc(" Registered Azure Website DNS Name: {}\n" |
196 | 188 | .format(hostname), 'green') |
197 | 189 | |
198 | def check_azure_websites(names, nameserver): | |
190 | def check_azure_websites(names, nameserver, threads): | |
199 | 191 | """ |
200 | 192 | Checks for Azure Websites (PaaS) |
201 | 193 | """ |
209 | 201 | |
210 | 202 | # Azure Websites use DNS sub-domains. If it resolves, it is registered. |
211 | 203 | utils.fast_dns_lookup(candidates, nameserver, |
212 | callback=print_website_response) | |
204 | callback=print_website_response, | |
205 | threads=threads) | |
213 | 206 | |
214 | 207 | # Stop the timer |
215 | 208 | utils.stop_timer(start_time) |
222 | 215 | utils.printc(" Registered Azure Database DNS Name: {}\n" |
223 | 216 | .format(hostname), 'green') |
224 | 217 | |
225 | def check_azure_databases(names, nameserver): | |
218 | def check_azure_databases(names, nameserver, threads): | |
226 | 219 | """ |
227 | 220 | Checks for Azure Databases |
228 | 221 | """ |
236 | 229 | |
237 | 230 | # Azure databases use DNS sub-domains. If it resolves, it is registered. |
238 | 231 | utils.fast_dns_lookup(candidates, nameserver, |
239 | callback=print_database_response) | |
232 | callback=print_database_response, | |
233 | threads=threads) | |
240 | 234 | |
241 | 235 | # Stop the timer |
242 | 236 | utils.stop_timer(start_time) |
249 | 243 | utils.printc(" Registered Azure Virtual Machine DNS Name: {}\n" |
250 | 244 | .format(hostname), 'green') |
251 | 245 | |
252 | def check_azure_vms(names, nameserver): | |
246 | def check_azure_vms(names, nameserver, threads): | |
253 | 247 | """ |
254 | 248 | Checks for Azure Virtual Machines |
255 | 249 | """ |
271 | 265 | |
272 | 266 | # Azure VMs use DNS sub-domains. If it resolves, it is registered. |
273 | 267 | utils.fast_dns_lookup(candidates, nameserver, |
274 | callback=print_vm_response) | |
268 | callback=print_vm_response, | |
269 | threads=threads) | |
275 | 270 | |
276 | 271 | # Stop the timer |
277 | 272 | utils.stop_timer(start_time) |
284 | 279 | |
285 | 280 | valid_accounts = check_storage_accounts(names, args.threads, |
286 | 281 | args.nameserver) |
287 | if valid_accounts: | |
282 | if valid_accounts and not args.quickscan: | |
288 | 283 | brute_force_containers(valid_accounts, args.brute, args.threads) |
289 | 284 | |
290 | check_azure_websites(names, args.nameserver) | |
291 | check_azure_databases(names, args.nameserver) | |
292 | check_azure_vms(names, args.nameserver) | |
285 | check_azure_websites(names, args.nameserver, args.threads) | |
286 | check_azure_databases(names, args.nameserver, args.threads) | |
287 | check_azure_vms(names, args.nameserver, args.threads) |
12 | 12 | 2017 |
13 | 13 | 2018 |
14 | 14 | 2019 |
15 | 2020 | |
15 | 16 | 3 |
16 | 17 | 4 |
17 | 18 | 5 |
26 | 27 | amazon |
27 | 28 | analytics |
28 | 29 | android |
30 | api | |
29 | 31 | app |
30 | 32 | appengine |
31 | 33 | appspot |
68 | 70 | contact |
69 | 71 | container |
70 | 72 | content |
73 | core | |
71 | 74 | corp |
72 | 75 | corporate |
73 | 76 | data |
101 | 104 | files |
102 | 105 | fileshare |
103 | 106 | filestore |
107 | firebase | |
104 | 108 | firestore |
105 | 109 | functions |
110 | gateway | |
106 | 111 | gcp |
107 | 112 | gcp-logs |
108 | 113 | gcplogs |
113 | 118 | graphite |
114 | 119 | graphql |
115 | 120 | gs |
121 | gw | |
116 | 122 | help |
123 | iaas | |
117 | 124 | hub |
118 | 125 | iam |
119 | 126 | images |
125 | 132 | iot |
126 | 133 | jira |
127 | 134 | js |
135 | k8s | |
128 | 136 | kube |
129 | 137 | kubeengine |
130 | 138 | kubernetes |
150 | 158 | oracle |
151 | 159 | org |
152 | 160 | packages |
161 | paas | |
153 | 162 | passwords |
154 | 163 | photos |
155 | 164 | pics |
173 | 182 | repo |
174 | 183 | reports |
175 | 184 | resources |
185 | rtdb | |
176 | 186 | s3 |
187 | saas | |
177 | 188 | screenshots |
178 | 189 | scripts |
179 | 190 | sec |
204 | 215 | subversion |
205 | 216 | support |
206 | 217 | svn |
218 | svc | |
207 | 219 | syslog |
208 | 220 | tasks |
209 | 221 | teamcity |
222 | 234 | users |
223 | 235 | ux |
224 | 236 | videos |
237 | vm | |
225 | 238 | web |
226 | 239 | website |
227 | 240 | wp |
3 | 3 | """ |
4 | 4 | |
5 | 5 | from enum_tools import utils |
6 | from enum_tools import gcp_regions | |
6 | 7 | |
7 | 8 | BANNER = ''' |
8 | 9 | ++++++++++++++++++++++++++ |
10 | 11 | ++++++++++++++++++++++++++ |
11 | 12 | ''' |
12 | 13 | |
13 | # Known S3 domain names | |
14 | # Known GCP domain names | |
14 | 15 | GCP_URL = 'storage.googleapis.com' |
16 | FBRTDB_URL = 'firebaseio.com' | |
15 | 17 | APPSPOT_URL = 'appspot.com' |
18 | FUNC_URL = 'cloudfunctions.net' | |
19 | ||
20 | # Hacky, I know. Used to store project/region combos that report at least | |
21 | # one cloud function, to brute force later on | |
22 | HAS_FUNCS = [] | |
16 | 23 | |
17 | 24 | def print_bucket_response(reply): |
18 | 25 | """ |
59 | 66 | # Stop the time |
60 | 67 | utils.stop_timer(start_time) |
61 | 68 | |
69 | def print_fbrtdb_response(reply): | |
70 | """ | |
71 | Parses the HTTP reply of a brute-force attempt | |
72 | ||
73 | This function is passed into the class object so we can view results | |
74 | in real-time. | |
75 | """ | |
76 | if reply.status_code == 404: | |
77 | pass | |
78 | elif reply.status_code == 200: | |
79 | utils.printc(" OPEN GOOGLE FIREBASE RTDB: {}\n" | |
80 | .format(reply.url), 'green') | |
81 | elif reply.status_code == 401: | |
82 | utils.printc(" Protected Google Firebase RTDB: {}\n" | |
83 | .format(reply.url), 'orange') | |
84 | elif reply.status_code == 402: | |
85 | utils.printc(" Payment required on Google Firebase RTDB: {}\n" | |
86 | .format(reply.url), 'orange') | |
87 | else: | |
88 | print(" Unknown status codes being received from {}:\n" | |
89 | " {}: {}" | |
90 | .format(reply.url, reply.status_code, reply.reason)) | |
91 | ||
92 | def check_fbrtdb(names, threads): | |
93 | """ | |
94 | Checks for Google Firebase RTDB | |
95 | """ | |
96 | print("[+] Checking for Google Firebase Realtime Databases") | |
97 | ||
98 | # Start a counter to report on elapsed time | |
99 | start_time = utils.start_timer() | |
100 | ||
101 | # Initialize the list of correctly formatted urls | |
102 | candidates = [] | |
103 | ||
104 | # Take each mutated keyword and craft a URL with the correct format | |
105 | for name in names: | |
106 | # Firebase RTDB names cannot include a period. We'll exclude | |
107 | # those from the global candidates list | |
108 | if '.' not in name: | |
109 | candidates.append('{}.{}/.json'.format(name, FBRTDB_URL)) | |
110 | ||
111 | # Send the valid names to the batch HTTP processor | |
112 | utils.get_url_batch(candidates, use_ssl=True, | |
113 | callback=print_fbrtdb_response, | |
114 | threads=threads, | |
115 | redir=False) | |
116 | ||
117 | # Stop the time | |
118 | utils.stop_timer(start_time) | |
119 | ||
62 | 120 | def print_appspot_response(reply): |
63 | 121 | """ |
64 | 122 | Parses the HTTP reply of a brute-force attempt |
68 | 126 | """ |
69 | 127 | if reply.status_code == 404: |
70 | 128 | pass |
71 | elif reply.status_code == 500 or reply.status_code == 503: | |
129 | elif str(reply.status_code)[0] == '5': | |
72 | 130 | utils.printc(" Google App Engine app with a 50x error: {}\n" |
73 | 131 | .format(reply.url), 'orange') |
74 | elif reply.status_code == 200 or reply.status_code == 302: | |
132 | elif (reply.status_code == 200 | |
133 | or reply.status_code == 302 | |
134 | or reply.status_code == 404): | |
75 | 135 | utils.printc(" Google App Engine app: {}\n" |
76 | 136 | .format(reply.url), 'green') |
77 | 137 | else: |
106 | 166 | # Stop the time |
107 | 167 | utils.stop_timer(start_time) |
108 | 168 | |
169 | def print_functions_response1(reply): | |
170 | """ | |
171 | Parses the HTTP reply of the initial Cloud Functions check | |
172 | ||
173 | This function is passed into the class object so we can view results | |
174 | in real-time. | |
175 | """ | |
176 | if reply.status_code == 404: | |
177 | pass | |
178 | elif reply.status_code == 302: | |
179 | utils.printc(" Contains at least 1 Cloud Function: {}\n" | |
180 | .format(reply.url), 'green') | |
181 | HAS_FUNCS.append(reply.url) | |
182 | else: | |
183 | print(" Unknown status codes being received from {}:\n" | |
184 | " {}: {}" | |
185 | .format(reply.url, reply.status_code, reply.reason)) | |
186 | ||
187 | def print_functions_response2(reply): | |
188 | """ | |
189 | Parses the HTTP reply from the secondary, brute-force Cloud Functions check | |
190 | ||
191 | This function is passed into the class object so we can view results | |
192 | in real-time. | |
193 | """ | |
194 | if 'accounts.google.com/ServiceLogin' in reply.url: | |
195 | pass | |
196 | elif reply.status_code == 403 or reply.status_code == 401: | |
197 | utils.printc(" Auth required Cloud Function: {}\n" | |
198 | .format(reply.url), 'orange') | |
199 | elif reply.status_code == 405: | |
200 | utils.printc(" UNAUTHENTICATED Cloud Function (POST-Only): {}\n" | |
201 | .format(reply.url), 'green') | |
202 | elif reply.status_code == 200 or reply.status_code == 404: | |
203 | utils.printc(" UNAUTHENTICATED Cloud Function (GET-OK): {}\n" | |
204 | .format(reply.url), 'green') | |
205 | else: | |
206 | print(" Unknown status codes being received from {}:\n" | |
207 | " {}: {}" | |
208 | .format(reply.url, reply.status_code, reply.reason)) | |
209 | ||
210 | def check_functions(names, brute_list, quickscan, threads): | |
211 | """ | |
212 | Checks for Google Cloud Functions running on cloudfunctions.net | |
213 | ||
214 | This is a two-part process. First, we want to find region/project combos | |
215 | that have existing Cloud Functions. The URL for a function looks like this: | |
216 | https://[ZONE]-[PROJECT-ID].cloudfunctions.net/[FUNCTION-NAME] | |
217 | ||
218 | We look for a 302 in [ZONE]-[PROJECT-ID].cloudfunctions.net. That means | |
219 | there are some functions defined in that region. Then, we brute force a list | |
220 | of possible function names there. | |
221 | ||
222 | See gcp_regions.py to define which regions to check. The tool currently | |
223 | defaults to only 1 region, so you should really modify it for best results. | |
224 | """ | |
225 | print("[+] Checking for project/zones with Google Cloud Functions.") | |
226 | ||
227 | # Start a counter to report on elapsed time | |
228 | start_time = utils.start_timer() | |
229 | ||
230 | # Pull the regions from a config file | |
231 | regions = gcp_regions.REGIONS | |
232 | ||
233 | print("[*] Testing across {} regions defined in the config file" | |
234 | .format(len(regions))) | |
235 | ||
236 | for region in regions: | |
237 | # Initialize the list of initial URLs to check | |
238 | candidates = [region + '-' + name + '.' + FUNC_URL for name in names] | |
239 | ||
240 | # Send the valid names to the batch HTTP processor | |
241 | utils.get_url_batch(candidates, use_ssl=False, | |
242 | callback=print_functions_response1, | |
243 | threads=threads, | |
244 | redir=False) | |
245 | ||
246 | # Return from the function if we have not found any valid combos | |
247 | if not HAS_FUNCS: | |
248 | utils.stop_timer(start_time) | |
249 | return | |
250 | ||
251 | # Also bail out if doing a quick scan | |
252 | if quickscan: | |
253 | return | |
254 | ||
255 | # If we did find something, we'll use the brute list. This will allow people | |
256 | # to provide a separate fuzzing list if they choose. | |
257 | print("[*] Brute-forcing function names in {} project/region combos" | |
258 | .format(len(HAS_FUNCS))) | |
259 | ||
260 | # Load brute list in memory, based on allowed chars/etc | |
261 | brute_strings = utils.get_brute(brute_list) | |
262 | ||
263 | # The global was built in a previous function. We only want to brute force | |
264 | # project/region combos that we know have existing functions defined | |
265 | for func in HAS_FUNCS: | |
266 | print("[*] Brute-forcing {} function names in {}" | |
267 | .format(len(brute_strings), func)) | |
268 | # Initialize the list of initial URLs to check. Strip out the HTTP | |
269 | # protocol first, as that is handled in the utility | |
270 | func = func.replace("http://", "") | |
271 | ||
272 | # Noticed weird behaviour with functions when a slash is not appended. | |
273 | # Works for some, but not others. However, appending a slash seems to | |
274 | # get consistent results. Might need further validation. | |
275 | candidates = [func + brute + '/' for brute in brute_strings] | |
276 | ||
277 | # Send the valid names to the batch HTTP processor | |
278 | utils.get_url_batch(candidates, use_ssl=False, | |
279 | callback=print_functions_response2, | |
280 | threads=threads) | |
281 | ||
282 | # Stop the time | |
283 | utils.stop_timer(start_time) | |
284 | ||
109 | 285 | def run_all(names, args): |
110 | 286 | """ |
111 | 287 | Function is called by main program |
113 | 289 | print(BANNER) |
114 | 290 | |
115 | 291 | check_gcp_buckets(names, args.threads) |
292 | check_fbrtdb(names, args.threads) | |
116 | 293 | check_appspot(names, args.threads) |
117 | return '' | |
294 | check_functions(names, args.brute, args.quickscan, args.threads) |
0 | """ | |
1 | File used to track the DNS regions for GCP resources. | |
2 | """ | |
3 | ||
4 | # Some enumeration tasks will need to go through the complete list of | |
5 | # possible DNS names for each region. You may want to modify this file to | |
6 | # use the regions meaningful to you. | |
7 | # | |
8 | # Whatever is listed in the last instance of 'REGIONS' below is what the tool | |
9 | # will use. | |
10 | ||
11 | ||
12 | # Here is the list I get when running `gcloud functions regions list` | |
13 | REGIONS = ['us-central1', 'us-east1', 'us-east4', 'us-west2', 'us-west3', | |
14 | 'us-west4', 'europe-west1', 'europe-west2', 'europe-west3', | |
15 | 'europe-west6', 'asia-east2', 'asia-northeast1', 'asia-northeast2', | |
16 | 'asia-northeast3', 'asia-south1', 'asia-southeast2', | |
17 | 'northamerica-northeast1', 'southamerica-east1', | |
18 | 'australia-southeast1'] | |
19 | ||
20 | ||
21 | # And here I am limiting the search by overwriting this variable: | |
22 | REGIONS = ['us-central1',] |
3 | 3 | |
4 | 4 | import time |
5 | 5 | import sys |
6 | import subprocess | |
7 | 6 | import datetime |
8 | 7 | import re |
9 | import requests | |
8 | from multiprocessing.dummy import Pool as ThreadPool | |
9 | from functools import partial | |
10 | 10 | try: |
11 | import requests | |
12 | import dns | |
13 | import dns.resolver | |
11 | 14 | from concurrent.futures import ThreadPoolExecutor |
12 | 15 | from requests_futures.sessions import FuturesSession |
13 | 16 | from concurrent.futures._base import TimeoutError |
14 | 17 | except ImportError: |
15 | print("[!] You'll need to pip install requests_futures for this tool.") | |
18 | print("[!] Please pip install requirements.txt.") | |
16 | 19 | sys.exit() |
17 | 20 | |
18 | 21 | LOGFILE = False |
30 | 33 | log_writer.write("\n\n#### CLOUD_ENUM {} ####\n" |
31 | 34 | .format(now)) |
32 | 35 | |
33 | def get_url_batch(url_list, use_ssl=False, callback='', threads=5): | |
36 | def get_url_batch(url_list, use_ssl=False, callback='', threads=5, redir=True): | |
34 | 37 | """ |
35 | 38 | Processes a list of URLs, sending the results back to the calling |
36 | 39 | function in real-time via the `callback` parameter |
50 | 53 | else: |
51 | 54 | proto = 'http://' |
52 | 55 | |
53 | # Start a requests object | |
54 | session = FuturesSession(executor=ThreadPoolExecutor(max_workers=threads)) | |
55 | ||
56 | 56 | # Using the async requests-futures module, work in batches based on |
57 | 57 | # the 'queue' list created above. Call each URL, sending the results |
58 | 58 | # back to the callback function. |
59 | 59 | for batch in queue: |
60 | # I used to initialize the session object outside of this loop, BUT | |
61 | # there were a lot of errors that looked related to pool cleanup not | |
62 | # happening. Putting it in here fixes the issue. | |
63 | # There is an unresolved discussion here: | |
64 | # https://github.com/ross/requests-futures/issues/20 | |
65 | session = FuturesSession(executor=ThreadPoolExecutor(max_workers=threads+5)) | |
60 | 66 | batch_pending = {} |
61 | 67 | batch_results = {} |
62 | 68 | |
63 | 69 | # First, grab the pending async request and store it in a dict |
64 | 70 | for url in batch: |
65 | batch_pending[url] = session.get(proto + url) | |
71 | batch_pending[url] = session.get(proto + url, allow_redirects=redir) | |
66 | 72 | |
67 | 73 | # Then, grab all the results from the queue. |
68 | 74 | # This is where we need to catch exceptions that occur with large |
72 | 78 | # Timeout is set due to observation of some large jobs simply |
73 | 79 | # hanging forever with no exception raised. |
74 | 80 | batch_results[url] = batch_pending[url].result(timeout=30) |
75 | except requests.exceptions.ConnectionError: | |
76 | print(" [!] Connection error on {}. Investigate if there" | |
77 | " are many of these.".format(url)) | |
81 | except requests.exceptions.ConnectionError as error_msg: | |
82 | print(" [!] Connection error on {}:".format(url)) | |
83 | print(error_msg) | |
78 | 84 | except TimeoutError: |
79 | 85 | print(" [!] Timeout on {}. Investigate if there are" |
80 | 86 | " many of these".format(url)) |
97 | 103 | # Clear the status message |
98 | 104 | sys.stdout.write(' \r') |
99 | 105 | |
100 | def fast_dns_lookup(names, nameserver, callback='', threads=25): | |
101 | """ | |
102 | Helper function to resolve DNS names. Uses subprocess for threading. | |
106 | def dns_lookup(nameserver, name): | |
107 | """ | |
108 | This function performs the actual DNS lookup when called in a threadpool | |
109 | by the fast_dns_lookup function. | |
110 | """ | |
111 | res = dns.resolver.Resolver() | |
112 | res.timeout = 10 | |
113 | res.nameservers = [nameserver] | |
114 | ||
115 | try: | |
116 | res.query(name) | |
117 | # If no exception is thrown, return the valid name | |
118 | return name | |
119 | except dns.resolver.NXDOMAIN: | |
120 | return '' | |
121 | except dns.exception.Timeout: | |
122 | print(" [!] DNS Timeout on {}. Investigate if there are many" | |
123 | " of these.".format(name)) | |
124 | ||
125 | def fast_dns_lookup(names, nameserver, callback='', threads=5): | |
126 | """ | |
127 | Helper function to resolve DNS names. Uses multithreading. | |
103 | 128 | """ |
104 | 129 | total = len(names) |
105 | 130 | current = 0 |
110 | 135 | # Break the url list into smaller lists based on thread size |
111 | 136 | queue = [names[x:x+threads] for x in range(0, len(names), threads)] |
112 | 137 | |
113 | # Work through the smaller lists in batches. Using Python's subprocess | |
114 | # module, those host OS will execute the `host` command. Python will | |
115 | # move on to the next and then check the output of the OS command when | |
116 | # finished queueing the batch. A status code of 0 means the host lookup | |
117 | # succeeded. | |
118 | 138 | for batch in queue: |
119 | batch_pending = {} | |
120 | batch_results = {} | |
121 | ||
122 | # First, grab the pending async request and store it in a dict | |
123 | for name in batch: | |
124 | # Build the OS command to lookup a DNS name | |
125 | cmd = ['host', '{}'.format(name), '{}'.format(nameserver)] | |
126 | ||
127 | # Run the command and store the pending output | |
128 | batch_pending[name] = subprocess.Popen(cmd, | |
129 | stdout=subprocess.DEVNULL, | |
130 | stderr=subprocess.DEVNULL) | |
131 | ||
132 | # Then, grab all the results from the queue | |
133 | for name in batch_pending: | |
134 | batch_pending[name].wait() | |
135 | batch_results[name] = batch_pending[name].poll() | |
136 | ||
137 | # If we get a 0, save it as a valid DNS name and send to callback | |
138 | # if defined. | |
139 | if batch_results[name] == 0: | |
140 | valid_names.append(name) | |
139 | pool = ThreadPool(threads) | |
140 | ||
141 | # Because pool.map takes only a single function arg, we need to | |
142 | # define this partial so that each iteration uses the same ns | |
143 | dns_lookup_params = partial(dns_lookup, nameserver) | |
144 | ||
145 | results = pool.map(dns_lookup_params, batch) | |
146 | ||
147 | # We should now have the batch of results back, process them. | |
148 | for name in results: | |
149 | if name: | |
141 | 150 | if callback: |
142 | 151 | callback(name) |
143 | ||
144 | # Refresh a status message | |
152 | valid_names.append(name) | |
153 | ||
145 | 154 | current += threads |
155 | ||
156 | # Update the status message | |
146 | 157 | sys.stdout.flush() |
147 | 158 | sys.stdout.write(" {}/{} complete...".format(current, total)) |
148 | 159 | sys.stdout.write('\r') |
160 | pool.close() | |
149 | 161 | |
150 | 162 | # Clear the status message |
151 | 163 | sys.stdout.write(' \r') |
152 | 164 | |
153 | # Return the list of valid dns names | |
154 | 165 | return valid_names |
155 | 166 | |
156 | 167 | def list_bucket_contents(bucket): |
203 | 214 | with open(LOGFILE, 'a') as log_writer: |
204 | 215 | log_writer.write(text.lstrip()) |
205 | 216 | |
217 | def get_brute(brute_file, mini=1, maxi=63, banned='[^a-z0-9_-]'): | |
218 | """ | |
219 | Generates a list of brute-force words based on length and allowed chars | |
220 | """ | |
221 | # Read the brute force file into memory | |
222 | with open(brute_file, encoding="utf8", errors="ignore") as infile: | |
223 | names = infile.read().splitlines() | |
224 | ||
225 | # Clean up the names so they contain only the allowed characters and lengths | |
226 | banned_chars = re.compile(banned) | |
227 | clean_names = [] | |
228 | for name in names: | |
229 | name = name.lower() | |
230 | name = banned_chars.sub('', name) | |
231 | if maxi >= len(name) >= mini: | |
232 | if name not in clean_names: | |
233 | clean_names.append(name) | |
234 | ||
235 | return clean_names | |
236 | ||
206 | 237 | def start_timer(): |
207 | 238 | """ |
208 | 239 | Starts a timer for functions in main module |