* No badgers were harmed in the creation of this blog *

** Not intended to diagnose, treat, cure, or prevent any disease **

Saturday, January 21, 2023

Capture the Flag: Raven, part 2


Last time, I left off during Open-Source Intelligence (OSINT), having looked through most of Raven’s website, found their blog, edited my /etc/hosts file to enable their blog to properly load, and finally loaded up their blog in my web browser. There is another page to the website: Contact. This page looks to have been left unchanged from the template used in creating the site, as it includes contact info specific to Bangladesh, including two phone numbers and a physical address. There is also a Google Maps display that didn’t properly load until I had clicked on it a few times, and a form for submitting data. Opening up Burp Suite, I discovered that the form submits to the site’s ip on port 80 via a POST request. Allowing my submission to go through revealed no obvious outcome from submitting the form.

So far, there are two obvious avenues of attack: the WordPress blog and the outdated server software. Note that WordPress isn’t inherently unsafe. Rather, it’s often misconfigured in an unsafe manner.

There may be more to find. So far, I’ve limited my nmap scan to ports 80 and 443. Running it against other ports might yield further information and/or confirm what I’ve already found. I’m also allowing Burp Suite to crawl the site, having set guardrails so that it doesn’t scan anything outside of my target’s ip. From this, I found the image library.

Here I found several flags (national flags, that is), probably used for setting the user’s language or location. I note that the first flag is Canadian. Perhaps the site was set up in Canada? Bangladesh’s flag is number 7. There is also a series of images of children, which I didn’t find on the site itself and which seem incongruous for a security company; and a disturbing image of a child clown doll. I hope it’s a doll. None of the images show any sticky notes in the background, which might have contained usernames and/or passwords. There is an image of a firefighter, but their service (e.g. city) is cropped out; though the image is likely a stock photo, it might have indicated sources of passwords. Similarly, the main background photo contains an officer. Her uniform bears chevrons, suggesting ranks like those of the military (two stripes would be a corporal). Another image appears to be from Dublin, which might also be a source of passwords, including the names of local sports teams. In truth, all of the images here are probably stock photos, but in a real engagement there might be valuable information among the dross.

There is no robots.txt file. These files direct web crawlers not to explore or index particular pages. This can be done to shield portions of the site from the search engines powered by the web crawler. From an offensive standpoint, this is a bit like saying “ignore the man behind the curtain,” so evaluating the contents of a robots.txt file can be useful OSINT. Robots.txt files can also be used to alert legitimate crawlers of spider traps (see https://en.wikipedia.org/wiki/Spider_trap).
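To illustrate what a robots.txt file would have offered, here is a sketch using Python’s standard-library parser. Raven has no robots.txt, so the file contents below are entirely hypothetical – an example of the kind of “ignore the man behind the curtain” hint such a file can leak:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- Raven has none, so these paths are
# invented to show what an attacker would look for.
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /backups/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Legitimate crawlers are asked to skip these paths; an attacker reads them first.
print(rp.can_fetch("*", "/admin/login.php"))  # False
print(rp.can_fetch("*", "/index.html"))       # True
```

The very paths a site asks crawlers to avoid are the first ones worth a manual visit.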

Any web page’s source is visible by right-clicking and choosing ‘View Source’ (or similar) from the contextual menu that pops up. Typically, the browser menu also offers a method, and the usual shortcut key is [CTRL] + [U]. Looking through the source code, at first I only note some spelling errors. There are also irregularities in the capitalization of the comment lines. These suggest a certain amount of sloppiness went into crafting the site, which may have carried over into its security posture.

One of the flags, it turns out, is hidden in the source code. I’ve whited out part of it – if you want the flag, you’ll have to find it yourself.


The source of the WordPress page contains about 150 lines of viewboxes and paths in a format I’m unfamiliar with. I’ll note it down, but given that I already have an outdated Apache install and a WordPress site, hopefully I won’t need to figure this out. The Contact page’s source code reveals nothing of import, though I am again struck that the phone number at the top right of the page does not match the phone number in the Contact area of the page.

I can also run a more complete nmap scan:

sudo nmap 192.168.159.140 -sC -sV -O -Pn -p- -oA raven1

Here, I’ve directed nmap to limit its attention to the target ip, rather than spreading itself over my entire subnet. I’m also scanning with ‘usual’ scripts (-sC), scanning for software versions (-sV), scanning for operating system and version (-O), suppressing ping as before, scanning all ports (-p-) and directing the output to be saved to files named “raven1” (-oA writes all three of nmap’s output formats: raven1.nmap, raven1.gnmap, and raven1.xml).

I find four open ports:

- port 22, running OpenSSH 6.7p1 Debian 5+deb8u4 (protocol 2.0)

- port 80, running Apache 2.4.10

- port 111, running rpcbind 2-4 (RPC #100000)

- port 58053, status 1 (RPC #100024). This port is also mentioned in association with port 111.

Port 111 is a well-known port for remote procedure calls (RPC). There may be something here, as there are known exploits for RPC services.

Considering now the Apache Server, the RPC functions, and the WordPress blog as potential avenues, WordPress feels easier.

Wpscan is a scanner for WordPress blogs; for noncommercial use, it is free. Among other things, it can scan for themes and plugins (including any associated vulnerabilities), enumerate usernames, and attempt to brute-force passwords. (As described in its user documentation (https://github.com/wpscanteam/wpscan/wiki/WPScan-User-Documentation), it can check for weak passwords by making brute-force attacks; exhaustive brute-force attacks require other tools.) Having updated the program itself (gem update wpscan) and its vulnerability database (wpscan --update), I’m ready to launch:

wpscan --url 192.168.159.140/wordpress -eu

wpscan addresses the program.

--url tells it I’m feeding in an address to scan; it is followed by a space and then the address of the blog (in this case, the ip address followed by “/wordpress”).

-eu tells it to enumerate users.

Note that without additional flags, wpscan will run somewhat aggressively (noisily).

Perhaps a second later, wpscan returns two users, “michael” and “steven”. It also gives me the WordPress version, and notes that this version is outdated.


Wednesday, January 11, 2023

Capture the Flag: Raven, part 1

Raven is a Capture the Flag (CTF) that runs as a virtual machine. The benefits of this are first, that as you hack Raven, you don’t have to worry about accidentally shutting something down – it’s a virtual machine with no other software on it; and second, you actually have to find the machine on your network, giving you some practice in network assessment.  I ran Raven on VMWare, attacking from a Kali Linux box, also on VMWare.

Finding the target:

Since I didn’t have an ip for Raven, my first step was a quick network scan.   I turned to nmap for this.  Since I knew that Raven includes a website, it seemed reasonable to scan for any machines with ports 80 (http) and/or 443 (https) open:

sudo nmap 192.168.159.0/24 -p80,443 -Pn

In this case, I didn’t need to use sudo, which runs nmap with administrator privileges, but nmap does require sudo for some of its scan types and I slipped up and added it.  No harm done.

nmap, of course, runs nmap, the network mapper, and this is followed by the CIDR range of my home network.
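For a sense of what that CIDR range covers, Python’s standard ipaddress module can spell out exactly which addresses a /24 makes nmap walk through:

```python
import ipaddress

# The /24 network scanned above: 192.168.159.0 through 192.168.159.255.
net = ipaddress.ip_network("192.168.159.0/24")
print(net.num_addresses)    # 256 addresses in total
hosts = list(net.hosts())   # hosts() excludes the network and broadcast addresses
print(len(hosts))           # 254 scannable hosts
print(hosts[0], hosts[-1])  # 192.168.159.1 192.168.159.254
```

So a full scan of this range means 254 candidate hosts, which is why trimming the port list matters.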

The -p flag (option) tells nmap to only look at ports 80 and 443.  By default, nmap scans the 1,000 most common ports on each host; trimming the list to two ports sped things up.  If I had remembered whether Raven hosted its website on http or https, I could have eliminated one of those ports and sped things up further, but with a small network (and no intrusion detection system (IDS)) the benefits would have been minimal.

-Pn tells nmap to skip pinging hosts to see if they’re live.  In practice, hosts can be told not to respond to pings, meaning that lack of response to a ping doesn’t prove the lack of a host at that address.  A flurry of pings can also set off an IDS.  Suppressing pings causes nmap to proceed as if it successfully received a ping from every host on the network – i.e. it fully scans every address, whether there’s evidence of a host at that address or not.  Depending on the complexity of the scan and the size of the network, this can greatly add to the work nmap has to do, and thus the time it takes nmap to complete, which the program warns you about when you use this flag.  Here, I had a small network and a simple scan, so the time burden was minimal.

Nmap returned five hosts. One was the ip for my attack box (the machine I was attacking from).  It is possible to tell nmap to skip the attack ip (via the --exclude option), but the extra keystrokes would have taken more time than they would have saved.  Of the other four, only one had an open port 80 or 443.  As a bonus, its MAC address is noted as VMWare, though MAC addresses can be spoofed.

Since port 80 was the open port, simply dropping that ip into my web browser of choice proved that this was indeed my target, as the Raven website popped up.

Assessing the Target:

Now that I had a website, it was time to take a look at it.  Much of the site isn’t actually fleshed out – the links lead nowhere.  At the bottom of the home page, however, there is a list of blog posts, which lead to a different nowhere.  Instead of leading right back to the page they start on (a # link), clicking on a blog entry yields an error:

Apache 2.4.10, huh?  Pretty outdated.  There should be some good exploits available here.

Adding this to my notebook, I kept going.  The bottom of the page contains a live link to colorlib, which provides website templates and WordPress themes.  Perhaps there are known flaws in some of their products?

Also at the bottom of the page there is a text entry field for visitors to enter their email addresses to stay abreast of Raven’s latest news.  A quick entry of a single quote yielded no peculiar results, but there could be something here on further investigation.
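Why a lone single quote is a quick probe: if the backend pastes user input straight into a SQL statement, the extra quote breaks the statement’s quoting. The query and table below are made up for illustration; nothing here is Raven’s actual backend code:

```python
# Hypothetical vulnerable pattern: input interpolated directly into SQL.
email = "test'@example.com"
query = "INSERT INTO subscribers (email) VALUES ('{}')".format(email)
print(query)
# INSERT INTO subscribers (email) VALUES ('test'@example.com')
# The stray quote closes the SQL string early; a vulnerable backend would
# throw a syntax error, while a parameterized query would store it safely.
```

A database error (or any odd behavior) in response to a quote is the classic first hint of SQL injection.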

The About Us, Service, and Team pages yielded nothing of note, but the blog page linked out to a WordPress blog. This page took about a minute to load, and came up as a mess.  Either someone did a really bad job coding here, or (more likely) the page isn’t loading properly.

Viewing the source code reveals the answer. There are three DNS prefetches. Two look to be sites on the web, but the third (the first, actually, on line 10, to raven.local) looks like an address stored on the same server as the web page.

A quick Google search tells me that DNS prefetches are hints to a web browser that resources will be requested from a particular location, so making a connection with that server now will save time when the resource is actually requested.  Searching the web page’s source code shows 27 places where resources are requested from raven.local (not shown).  This looks to be the problem: the browser can’t find the resources that are stashed at raven.local.  No DNS lookup can provide this address; I need to tell the browser that these resources are located locally.
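Counting those references can be scripted. The markup below is a paraphrase of the kind of lines in the blog’s source, not copied verbatim from the page:

```python
import re

# Illustrative lines resembling the blog's source -- not the actual markup.
html = """
<link rel="dns-prefetch" href="//raven.local">
<link rel="stylesheet" href="http://raven.local/wordpress/wp-content/themes/style.css">
<script src="http://raven.local/wordpress/wp-includes/js/jquery.js"></script>
"""

# Count references to the host the browser can't resolve.
print(len(re.findall(r"raven\.local", html)))  # 3
```

Run against the real page source, the same pattern turns up the 27 unresolvable references noted above.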

This is done via the /etc/hosts file.  Here, I’ve added Raven’s ip and the text ‘raven.local’.  When the computer wants to access a domain, it will first look here to resolve the domain name to its ip address; adding this line tells the computer where raven.local is.  At the same time, I could add a line stating that 192.168.159.140 is “target”, so that I could substitute “target” for the ip for the rest of this engagement.

Note that on some setups (a live-boot session without persistence, for example) changes to /etc/hosts won’t survive a reboot; on a standard install, the file is persistent.  I edited the file using nano – note that this file is write-restricted – I needed to edit with root privilege to be able to change the file (sudo nano /etc/hosts).
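For reference, the added line would look something like this (the “target” alias is the optional extra mentioned above):

```
# /etc/hosts -- map the target's ip to the hostname the page expects
192.168.159.140    raven.local
# optional convenience alias:
192.168.159.140    target
```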

Saving my updated /etc/hosts file and reloading the blog generated a clean, properly loaded site, and enabled me to easily find the login page.

NOTE: In this write-up, I have included images of the Raven website. This website is copyright protected. However, I believe that the use of low-resolution images, used for the purposes of educating users about the site, is acceptable "fair use" of this material. If the copyright holder wishes me to remove these images, please comment on this post, including a method for replying.


Why Today's FAA Snafu is a Cybersecurity Failure

Cybersecurity, at the end of the day, is about protecting the CIA triad of confidentiality, integrity, and availability: data should be kept confidential (unauthorized persons should not have access), with integrity (it should be complete and correct) and availability (persons authorized to view and use the data should be able to do so).  Today's failure of the FAA's NOTAM (Notice to Air Missions) safety system is a clear example of a failure of availability, and the chaos that can result from that type of failure.  Without the data provided by the system, thousands of flights were grounded.  The cause now appears to be a corrupted file (1, 2), meaning that a failure of integrity, another pillar of the triad, led to a loss of availability.

Though data breaches, hacking, and pen testing tend to be the first things thought of when cybersecurity is discussed, the more pedestrian function of keeping data safe from accidents is just as important.

1. Blackman, Jay, et al. "Corrupted file to blame for FAA aviation stoppage that delayed thousands of flights," NBC News, updated Jan. 11, 2023, 6:47 PM EST.  Accessed at https://www.nbcnews.com/news/us-news/us-flights-grounded-faa-outage-rcna65243, Jan. 11, 2023, 6:56 PM EST

2. Federal Aviation Administration, "FAA NOTAMS Statement: Wednesday, January 11, 2023", Accessed at https://www.faa.gov/newsroom/faa-notams-statement Jan 11, 2023, 7:00 PM EST