February 15, 2018
What is property testing? In short, it's a method of testing a program's output against the expected behavior, or properties, of a piece of code. Why should you care? The same reason we here at Tinfoil Security care: good testing goes beyond ensuring your code is functional. It can be a crucial line of defense when it comes to the security of your applications, and property testing is a uniquely powerful tool for accomplishing these ends. But before we dive deeper, let's review more traditional testing.
```elixir
it "finds the biggest element in the list" do
  assert 5 = biggest([5])
  assert 6 = biggest([6, 5])
  assert 100 = 100..1 |> Enum.to_list() |> biggest()
end
```
The test above is fairly straightforward. It attempts to check that the `biggest` function will in fact return the biggest element of a provided list. It falls short in a few noticeable ways: What if the list is empty? What if it isn't sorted? What if there are duplicate integers?
Traditional testing very often focuses on specific examples and is dependent on the assumptions of the programmer. The function above may have been created with only positive integers in mind and it may not occur to the writer to test for cases involving negatives.
This example is a simple one, but it demonstrates a major drawback of traditional testing: it reveals the presence of expected bugs, rather than the absence of unexpected bugs. How would we pursue the latter? Enter property testing.
What is a Property?
Property testing is reliant on describing the properties of a function or piece of code. Very simply, these properties are general rules on how a program should behave. For the example function above, we might define the following:
"`biggest` returns the largest element of a list."
We might describe the properties of other well-known algorithms as such:
"`sort` returns a list with every element in ascending order."
"`append` returns a list with a length equal to the sum of the lengths of both lists passed to it."
"`append` returns a list with every element of the first list, followed by every element of the second list."
Once we have defined the general properties of a program we can move beyond specific examples. From there we can use generators to test the output of our code against these properties using a variety of generated inputs.
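To make that loop concrete, here is a minimal hand-rolled sketch in Ruby. The `biggest` implementation and the crude `random_list` generator are hypothetical stand-ins for what a real property-testing framework provides:

```ruby
# A hypothetical implementation under test.
def biggest(list)
  list.reduce { |a, b| a > b ? a : b }
end

# A crude generator: random integer lists of varying length.
def random_list(max_len = 20)
  Array.new(rand(1..max_len)) { rand(-1000..1000) }
end

# Check the property "biggest returns the largest element"
# against 100 randomly generated inputs.
100.times do
  input = random_list
  result = biggest(input)
  raise "Property failed for #{input.inspect}" unless input.all? { |x| result >= x }
end

puts "Property held for 100 generated inputs"
```

Notice that negative numbers, duplicates, and unsorted lists all fall out of the generator for free, with no one having to think them up.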
How to Describe a Property?
This is easier said than done, however. Describing the properties of a program can be difficult, but there are a few general strategies, as described by Fred Hebert's excellent book on property testing in Erlang:
Modeling: Modeling involves reimplementing your algorithm with a simpler (though likely less efficient) one. Our `biggest` function, for example, could have its output compared against an algorithm that uses `sort` to arrange a list in ascending order and then returns the final element. Sorting is far less time-efficient, O(n log n) compared to the O(n) of our `biggest` function, but since it retains the same properties as `biggest`, we can use it as a model to test our results against.
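In Ruby, that modeling check might look like this (the `biggest` implementation is again a hypothetical stand-in):

```ruby
# The implementation under test.
def biggest(list)
  list.reduce { |a, b| a > b ? a : b }
end

# The model: simpler, slower, obviously correct.
def biggest_model(list)
  list.sort.last
end

# If the fast implementation and the naive model ever disagree
# on a generated input, the property fails.
input = (1..100).to_a.shuffle
raise "model disagrees for #{input.inspect}" unless biggest(input) == biggest_model(input)
```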
Equivalent Statements: Equivalent statements reframe the property as a simpler one. For instance, we could say that the element returned by `biggest` is larger than or equal to every element of the input list. This simplified property is not quite the same, but it is fundamentally equivalent to the one we defined above.
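That equivalent statement is cheap to check directly, without any model (again with a hypothetical `biggest`):

```ruby
def biggest(list)
  list.reduce { |a, b| a > b ? a : b }
end

input = [4, -7, 0, 4, 12]
result = biggest(input)

# Equivalent statement: the result is >= every element of the input...
raise "property failed" unless input.all? { |x| result >= x }
# ...and it must actually come from the input, not be invented.
raise "not a member of the input" unless input.include?(result)
```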
Symmetry: Some functions have natural inverses. The process of encrypting and decrypting data, for example, can be described by the following properties:
- The input encrypted, then decrypted will return the original input.
- The input decrypted, then encrypted will return the original input.
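The first of those round-trip properties is easy to check against Ruby's OpenSSL bindings. The choice of AES-256-CBC here is purely illustrative:

```ruby
require 'openssl'

# One shared key and IV for the whole run.
cipher = OpenSSL::Cipher.new('aes-256-cbc')
key = cipher.random_key
iv  = cipher.random_iv

def encrypt(data, key, iv)
  c = OpenSSL::Cipher.new('aes-256-cbc').encrypt
  c.key = key
  c.iv = iv
  c.update(data) + c.final
end

def decrypt(data, key, iv)
  c = OpenSSL::Cipher.new('aes-256-cbc').decrypt
  c.key = key
  c.iv = iv
  c.update(data) + c.final
end

# Symmetry property: decrypt(encrypt(x)) == x, for many random inputs.
20.times do
  input = OpenSSL::Random.random_bytes(rand(1..64))
  raise "round-trip failed" unless decrypt(encrypt(input, key, iv), key, iv) == input
end
puts "round-trip property held"
```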
Oracles: Oracles involve using a reference implementation to compare your output against and, as such, are perhaps the best way to test the properties of your code. Oracles are most often used when porting existing code from one language to another, or when replacing a working implementation with an improved one.
Implementing property tests is not easy. It relies not only on describing the properties you wish to test against, but also on constructing generators to create the large, varied sets of randomized input to feed into your code. A single property may be tested hundreds of times, and generators will often create increasingly complicated inputs across these test iterations, or "generations" as they are called.
One can imagine that this randomly generated input could quickly become too unwieldy for the developer to make sense of. The failing case may contain large amounts of data irrelevant to the actual cause of the failure. To help narrow things down to the true cause, a property-testing framework will often attempt to reduce, or "shrink", the failing case down to a minimal reproducible state. This usually involves shrinking integers down to zero, strings to "", and lists to [].
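A toy version of list shrinking can be sketched in a few lines of Ruby. This is a deliberately naive strategy (drop one element at a time while the property still fails), not what a real framework does:

```ruby
# Repeatedly try removing single elements from a failing list
# while the property still fails, to find a smaller counterexample.
def shrink(input, &property)
  loop do
    smaller = input.each_index
                   .map { |i| input[0...i] + input[i + 1..-1] }
                   .find { |candidate| !property.call(candidate) }
    return input if smaller.nil?  # nothing smaller still fails
    input = smaller
  end
end

# A deliberately false property: "no list contains the number 7".
property = ->(list) { !list.include?(7) }
failing = [3, 7, 12, -4, 7, 9]
puts shrink(failing, &property).inspect  # prints [7]
```

Starting from a noisy six-element failure, the shrinker lands on the minimal case: a single-element list containing 7.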
Fortunately, there are a variety of language-specific property-testing libraries currently available. StreamData, for example, is an Elixir property-testing library - and a candidate for inclusion in Elixir proper - that provides built-in generators for primitive data types as well as tools to create custom ones. Generators can even be used to generate symbolic function calls, making it possible to fuzz and test transitions on a state machine.
As a final note, it should be mentioned that while property testing is a powerful tool, it is not a perfect solution. Describing the properties of a piece of code can be difficult, as can coming up with tests for those properties. Furthermore, these tests rely on well-made generators to come up with varied and unexpected input, which can itself be a difficult and time-consuming task.
It is also important to note that more traditional testing should not be entirely eschewed for property tests. The real strength of property testing is in using generated input to automate the tedious work of thinking up unusual edge cases and writing individual tests for them, and it is at its best when used alongside unit tests that check for unique edge cases or document unusual behavior.
At Tinfoil Security, we understand that thorough and effective testing is an essential part of creating efficient and secure technology. If you have any questions or would like to let us know how property testing has helped in your projects, feel free to reach out at firstname.lastname@example.org.
August 27, 2013
The Tinfoil Team was at the RSA Conference earlier this year and we noticed two things: there was no secure WiFi, and someone had set up a Pineapple, a device that spoofs wireless networks and sits in the middle of your connection to harvest all unencrypted information passing through it. Pineapples are not unexpected at a security conference, but they are something to be wary of on any wireless network, especially an unsecured one.
Luckily, it's fairly easy to set up a VPN and protect your laptop's or phone's communication. Traffic going over SSL/TLS gets this sort of protection automatically (as long as you pay attention to certificate trust errors), but if you ever hit a URL over http, your cookies and sessions just got hijacked. Your own VPN mitigates this, and you can set it up yourself. For this guide I'm going to be setting up OpenVPN on a DigitalOcean box running Ubuntu, but it should be similar on other platforms. Don't have a DigitalOcean account yet? Sign up here.
The Automatic Way
We've created a handy automated script to create a VPN. You can find the script on GitHub.
Download the script, and then run it:
chmod +x openvpn.sh
./openvpn.sh
View the configuration file:
Copy/Paste it into OpenVPN and you're good to go!
The Manual Way
First things first, log into the box and get the basics up and going.
Once you've got the basic infrastructure in place, install OpenVPN:
sudo aptitude install openvpn
And extract the sample configuration files:
sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
sudo gzip -d /etc/openvpn/server.conf.gz
Next up, it's time to set up the authentication before we head back to customize the config. I recommend setting up a quick Public Key Infrastructure -- essentially your own SSL certificates. You'll want a key and certificate for the VPN and keys/certificates for the clients.
On your VPN:
sudo openssl req -newkey rsa:4096 -keyout /etc/openvpn/vpn-key.pem -out vpn.csr
On your CA's environment (hopefully elsewhere):
openssl x509 -CA cacert.pem -CAkey cakey.pem -CAcreateserial \
  -days 730 -req -in vpn.csr -out vpn-cert.pem
Then transfer your cacert.pem and vpn-cert.pem files back to the VPN box into /etc/openvpn.
Create an additional shared-secret for extra security.
sudo openvpn --genkey --secret /etc/openvpn/ta.key
Next, create some Diffie-Hellman parameters for OpenVPN. Change 4096 to your key size if it's smaller, and note that a 4096-bit prime will take a while to find on a small cloud instance.
sudo openssl dhparam -out /etc/openvpn/dh4096.pem 4096
Lock down the private files:
sudo chmod 600 /etc/openvpn/vpn-key.pem
sudo chmod 600 /etc/openvpn/ta.key
Create the tunnel so your VPN acts as a NAT for the clients:
sudo sysctl -w net.ipv4.ip_forward=1
Also uncomment the line in /etc/sysctl.conf to persist this across restarts.
sudo iptables -I FORWARD -i tun0 -o eth0 -s 10.8.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
sudo iptables -I FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo iptables -t nat -I POSTROUTING -o eth0 -s 10.8.0.0/24 -j MASQUERADE
Finally, modify the openvpn configuration (/etc/openvpn/server.conf) with your setup. Here's a quick example of the important pieces:
```
# Which TCP/UDP port should OpenVPN listen on?
port 80

# TCP or UDP server?
proto udp

# "dev tun" will create a routed IP tunnel.
dev tun

# SSL/TLS root certificate (ca), certificate
# (cert), and private key (key). Each client
# and the server must have their own cert and
# key file. The server and all clients will
# use the same ca file.
ca cacert.pem
cert vpn-cert.pem
key vpn-key.pem  # This file should be kept secret

# Diffie hellman parameters.
dh dh4096.pem

# Configure server mode and supply a VPN subnet
# for OpenVPN to draw client addresses from.
# The server will take 10.8.0.1 for itself,
# the rest will be made available to clients.
# Each client will be able to reach the server
# on 10.8.0.1. Comment this line out if you are
# ethernet bridging. See the man page for more info.
server 10.8.0.0 255.255.255.0

# Maintain a record of client <-> virtual IP address
# associations in this file. If OpenVPN goes down or
# is restarted, reconnecting clients can be assigned
# the same virtual IP address from the pool that was
# previously assigned.
ifconfig-pool-persist ipp.txt

# If enabled, this directive will configure
# all clients to redirect their default
# network gateway through the VPN, causing
# all IP traffic such as web browsing and
# DNS lookups to go through the VPN
# (The OpenVPN server machine may need to NAT
# or bridge the TUN/TAP interface to the internet
# in order for this to work properly).
push "redirect-gateway def1 bypass-dhcp"

# Certain Windows-specific network settings
# can be pushed to clients, such as DNS
# or WINS server addresses. CAVEAT:
# http://openvpn.net/faq.html#dhcpcaveats
# The addresses below refer to the public
# DNS servers provided by opendns.com.
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"

# Uncomment this directive to allow different
# clients to be able to "see" each other.
# By default, clients will only see the server.
# To force clients to only see the server, you
# will also need to appropriately firewall the
# server's TUN/TAP interface.
client-to-client

# Uncomment this directive if multiple clients
# might connect with the same certificate/key
# files or common names. This is recommended
# only for testing purposes. For production use,
# each client should have its own certificate/key
# pair.
#
# IF YOU HAVE NOT GENERATED INDIVIDUAL
# CERTIFICATE/KEY PAIRS FOR EACH CLIENT,
# EACH HAVING ITS OWN UNIQUE "COMMON NAME",
# UNCOMMENT THIS LINE.
;duplicate-cn

# The keepalive directive causes ping-like
# messages to be sent back and forth over
# the link so that each side knows when
# the other side has gone down.
# Ping every 10 seconds, assume that remote
# peer is down if no ping received during
# a 120 second time period.
keepalive 10 120

# For extra security beyond that provided
# by SSL/TLS, create an "HMAC firewall"
# to help block DoS attacks and UDP port flooding.
#
# The server and each client must have
# a copy of this key.
# The second parameter should be '0'
# on the server and '1' on the clients.
tls-auth ta.key 0  # This file is secret

# Enable compression on the VPN link.
# If you enable it here, you must also
# enable it in the client config file.
comp-lzo

# It's a good idea to reduce the OpenVPN
# daemon's privileges after initialization.
user nobody
group nogroup

# The persist options will try to avoid
# accessing certain resources on restart
# that may no longer be accessible because
# of the privilege downgrade.
persist-key
persist-tun

# Output a short status file showing
# current connections, truncated
# and rewritten every minute.
status openvpn-status.log

# Set the appropriate level of log
# file verbosity.
#
# 0 is silent, except for fatal errors
# 4 is reasonable for general usage
# 5 and 6 can help to debug connection problems
# 9 is extremely verbose
verb 4
```
Whew, you're almost there now! Create some client certificates and configure your computer or phone to connect. Our favorite VPN application is Viscosity. As you try out a connection, tail /var/log/syslog for connection errors. LZO compression and the ta.key/direction are especially important to get a stable connection.
Now you hopefully have a VPN set up. It may be a tad bit slower than without protection, but your data will be heavily encrypted over the public WiFi. Got any suggestions or OpenVPN configurations for other clients? Let me know!
July 11, 2013
Securing your stack with SSL/TLS doesn't have to be costly or time-consuming. There are two sides to TLS: Public Key Infrastructure (PKI) and public key/asymmetric cryptography. Hopefully your public network data is already encrypted with SSL/TLS (leveraging public key cryptography), but you can also leverage a PKI to easily gain additional security wins for your internal network traffic.
SSL/TLS is most frequently used for encrypting data on the wire. You want to prevent the person next to you at Starbucks from being able to see your password as you submit a login form on the WiFi. It is also used to verify the identity of a web server -- peer verification. You want to know that data going to and from www.tinfoilsecurity.com actually is coming from our servers, and not from someone else's. Early on in the TLS handshake your browser (and any underlying library like OpenSSL) does most of the heavy lifting for you: checking that the certificate's CommonName or SubjectAltNames matches the hostname and verifying the chain of trust. However, a tragically underused feature of SSL/TLS is that the client also has the opportunity to present a certificate during the handshake, and the server can use it to verify the client's identity as well.
Why doesn't every browser present a certificate to every SSL website identifying the visitor as me? It certainly would make logins simple, but the rub comes with trust -- any certificate could identify the user as Ben Sedat. Public Key Infrastructure tries to solve this, and if both the website and client trust an authority to sign the certificates and keep track of them then you can establish a trusted link and make it a lot harder to forge fake things into the system.
Websites do this by requesting a certificate from a set of known Certificate Authorities. Browsers and operating systems keep lists of valid CAs and alert you when the certificate's chain of trust isn't valid. These CAs charge for their services, usually anywhere from $10 to $100 a year. Both you and your website's visitor trust the CA and everything works well, at least until it doesn't. What isn't widely publicized is that you can easily set up your own CA.
This CA isn't going to be trusted by your website visitors, but your infrastructure can trust it as long as you keep it safe. All too often we see companies and services disabling SSL entirely. They think that they need an expensive certificate, signed by a well-known CA. In the case of a website with clients that aren't under your control, that may be necessary. However, if you have more direct control over the client (like if you're your own API client) you can establish who you want to trust. Many libraries, like the Ruby hooks around OpenSSL and SSL connections allow you to specify which certificate authorities you want to trust in addition to the system defaults.
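In Ruby, for instance, you can build a certificate store that trusts the system roots plus your own CA. The CA certificate below is generated inline so the sketch is self-contained; in practice you would load your real one from disk with `store.add_file 'blogCA-cert.pem'`:

```ruby
require 'openssl'

# A throwaway self-signed CA certificate, generated inline for the example.
ca_key = OpenSSL::PKey::RSA.new(2048)
ca_cert = OpenSSL::X509::Certificate.new
ca_cert.serial = 1
ca_cert.subject = ca_cert.issuer = OpenSSL::X509::Name.parse('/CN=blogCA')
ca_cert.public_key = ca_key.public_key
ca_cert.not_before = Time.now
ca_cert.not_after = Time.now + 365 * 86_400
ca_cert.sign(ca_key, OpenSSL::Digest.new('SHA256'))

# Trust the system's default CAs *plus* our own.
store = OpenSSL::X509::Store.new
store.set_default_paths
store.add_cert(ca_cert)

# Certificates issued by blogCA (or blogCA itself) now pass verification.
puts store.verify(ca_cert)
```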
There isn't a perfect security system but you can easily add hurdles to delay and frustrate an attacker. SSL encryption of internet traffic is one such hurdle, and enforcing client/server identities on top creates one more hoop an attacker has to jump through to be successful. A client certificate isn't the be-all-end-all either -- it can be combined with usernames/passwords, two-factor authentication on another device, or all sorts of other methods to harden a system even more.
Go Forth and Make Your Own CA
OpenSSL is the de-facto standard command line utility for dealing with certificates. At Tinfoil we also really like XCA (http://sourceforge.net/projects/xca/); it's open-source and provides a (mostly) usable UI over the somewhat arcane OpenSSL invocations. Creating a CA and some certificates is easy in XCA, and once you're done you can have pieces of your application authenticating each other and keeping the communication secure.
Creating a certificate authority
First, secure your XCA database (and private keys) with good passwords. Keep these safe, and don't store them anywhere digital! This is a critical layer of security for your CA.
Generate your CA's private key
Choose a nice long key here. There's some debate over key sizes regarding performance, but it's worth it to go big here unless you're constrained by an embedded device. 1024-bit is considered too small, with 2048-bit being better currently but eventually breakable. 4096-bit may take a while to generate, but it's worth it. It's also worth noting that RSA keys are more performant while encrypting, while DSA is faster for signing requests. ECDSA uses elliptic curves, which allow for smaller keys but are more computationally intensive.
The equivalent openssl incantations (for newer OpenSSL versions it may be easier to use the `openssl genpkey` command):
openssl genrsa -des3 -out blogCA-key.pem <bits>
openssl dsaparam -out blogCA-params.pem <bits>
openssl gendsa -des3 -out blogCA-key.pem blogCA-params.pem
openssl ecparam -name prime256v1 -out blogCA-params.pem
openssl ecparam -genkey -in blogCA-params.pem -out blogCA-key.pem
openssl ec -des3 -in blogCA-key.pem -out blogCA-key.pem
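If you'd rather script key generation, Ruby's OpenSSL bindings can produce each flavor directly. Key sizes are reduced here so the example runs quickly; use 4096-bit RSA for a real CA:

```ruby
require 'openssl'

# RSA: big keys, widely supported.
rsa = OpenSSL::PKey::RSA.new(2048)

# DSA: historically faster at signing.
dsa = OpenSSL::PKey::DSA.new(2048)

# ECDSA: much smaller keys for comparable strength.
ec = OpenSSL::PKey::EC.generate('prime256v1')

# Each object holds a private key ready to be written out with #to_pem
# (pass a cipher and passphrase to encrypt it, like -des3 above).
puts rsa.private?
```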
Generate your CA's public certificate
Choose the CA template and Apply All of the extensions. This flags your new certificate as a Certificate Authority, which some clients care about when evaluating the chain of trust. Also give it a commonName, essentially an identifier for your certificate, and choose an end date fairly far in the future.
Via the command line:
openssl req -new -x509 -days 730 -key blogCA-key.pem -out blogCA-cert.pem
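The same step can be scripted with Ruby's OpenSSL bindings, including the CA flag that XCA's template applies. Names and lifetimes here are just examples:

```ruby
require 'openssl'

ca_key = OpenSSL::PKey::RSA.new(2048)  # 4096 for a real CA

ca = OpenSSL::X509::Certificate.new
ca.version = 2                         # X.509v3, required for extensions
ca.serial = 1
ca.subject = OpenSSL::X509::Name.parse('/CN=blogCA')
ca.issuer = ca.subject                 # self-signed
ca.public_key = ca_key.public_key
ca.not_before = Time.now
ca.not_after = Time.now + 730 * 86_400 # ~2 years, like -days 730

# Flag the certificate as a CA -- the part some clients check
# when evaluating the chain of trust.
ef = OpenSSL::X509::ExtensionFactory.new
ef.subject_certificate = ca
ef.issuer_certificate = ca
ca.add_extension(ef.create_extension('basicConstraints', 'CA:TRUE', true))
ca.add_extension(ef.create_extension('keyUsage', 'keyCertSign, cRLSign', true))

ca.sign(ca_key, OpenSSL::Digest.new('SHA256'))
puts ca.to_pem
```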
Generate a server certificate
You can generate the private key, request, and certificate for a server application all within XCA. For bonus points, you can generate the private key and request on your server/clients directly, transfer just the request over, sign it with the CA and transfer the certificate back to the server. This prevents the private key from ever going on your network, although secure tunnels like SSH can make this process safer.
Create the private keys via CLI:
openssl req -newkey rsa:<bits> -keyout server1-key.pem -out server1.csr
openssl req -newkey dsa:blogCA-params.pem -keyout server1-key.pem -out server1.csr
openssl req -newkey ec:blogCA-params.pem -keyout server1-key.pem -out server1.csr
Signing the request via CLI:
openssl x509 -CA blogCA-cert.pem -CAkey blogCA-key.pem -CAcreateserial \
  -days 365 -req -in server1.csr -out server1-cert.pem
You can repeat this process for your servers and/or your clients. Now you have all the credential pieces for parts of your infrastructure to identify and verify each other during communication.
Validating the trust
Now it's time to use the certificates in your infrastructure. Standard peer verification from the client's perspective checks against the requested hostname, but in (cloud) infrastructure that could be a frequently changing target. One option is to use a load balancer behind a CNAME. Another option when you are your own client is to check the common or subject names in the provided certificate and validate against those and your certificate authority.
There can be some gotchas in the minefield of secure SSL communication, especially as we're using the commonName as a service identifier rather than just matching the hostname. The examples below perform peer verification from the client's perspective with that in mind. Feel free to send along any equivalent ones in other languages and I'd be happy to credit and include them. Sample Ruby client code, with sockets:
```ruby
require 'socket'
require 'openssl'

context = OpenSSL::SSL::SSLContext.new
context.verify_mode = OpenSSL::SSL::VERIFY_FAIL_IF_NO_PEER_CERT
store = OpenSSL::X509::Store.new
store.add_file 'blogCA-cert.pem'
context.cert_store = store
context.verify_callback = proc do |preverify, ssl_context|
  raise OpenSSL::SSL::SSLError.new unless preverify && ssl_context.error == 0
end

socket = TCPSocket.new('service.example.com', 8080)
ssl_connection = OpenSSL::SSL::SSLSocket.new(socket, context)
ssl_connection.connect
ssl_connection.post_connection_check('server1') # Check the CommonName or SubjectAltNames
# SSL connection ready for use now
```
And the same verification with net/http:

```ruby
require 'net/http'
require 'openssl'

store = OpenSSL::X509::Store.new
store.add_file 'blogCA-cert.pem'

Net::HTTP.start('service.example.com', 8080, :use_ssl => true,
                :verify_mode => OpenSSL::SSL::VERIFY_NONE) do |http|
  # Our own verification, as net/http uses hostname validation. Sadly, the flag
  # for this (https://github.com/ruby/ruby/blob/trunk/lib/net/http.rb#L658) is unchecked.
  raise 'Possible MITM' unless OpenSSL::SSL.verify_certificate_identity(http.peer_cert, 'server1')
  raise 'Bad trust chain' unless store.verify(http.peer_cert)
  # SSL connection ready for use now
end
```
It's worth taking a moment to go over some basic CA security. Like a real CA, you need to keep your CA's passphrase and private key secret. I recommend taking a page from Bitcoin wallet private key protection and put the private key on some cold storage, like a VM or flash drive that you can keep entirely off the internet.
Expiration dates are the other gotcha of this type of security. Generating new keys and certificates should be easy now, so you can use a shorter certificate lifespan before rotating them. If you use a calendaring solution, it's worth putting in a reminder a few weeks before the certificate expires so you remember to renew. Don't get caught unawares.
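Alongside the calendar reminder, a tiny script can warn you programmatically. The helper and the 30-day threshold are just an illustration, and a throwaway self-signed cert stands in for your real one here:

```ruby
require 'openssl'

# Number of whole days until a certificate expires.
def days_until_expiry(cert)
  ((cert.not_after - Time.now) / 86_400).floor
end

# Throwaway self-signed cert valid for 20 days, standing in for server1-cert.pem.
key = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.serial = 1
cert.subject = cert.issuer = OpenSSL::X509::Name.parse('/CN=server1')
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after = Time.now + 20 * 86_400
cert.sign(key, OpenSSL::Digest.new('SHA256'))

warn "renew soon: #{days_until_expiry(cert)} days left" if days_until_expiry(cert) < 30
```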
That's the basics of creating and using your own CA! For extra credit, you can post your CA's certificate revocation list someplace and check against that as well. Certificate Authorities are fairly easy to set up and provide extra hurdles of identification if someone has intruded on your network and wants to man-in-the-middle your network traffic.