RSA Conference 2019: Questions, Comments, Concerns?

What Just Happened?!

A little over a month ago, the Tinfoil Security team was out in full force at RSA, where we met and talked to over a thousand of you. We were ecstatic about these conversations: what we have been working on falls in line with what developers, CISOs, and security engineers see as relevant and essential. The fact that security keeps coming to the forefront is good for everyone. As security awareness spreads, the expectation for security will rise, and companies with poor security practices will lose out. Consumer information will be better protected by improving security practices and processes.

The Takeaways

There were a few things that we believe stood out the most at the conference. First, GDPR: what is going on with it, how do we comply, and do we even need to comply? These were a few of the questions floating around. GDPR has created a whole new sub-industry: at this year’s RSA, there were many consulting, auditing, and assessment companies explicitly playing in the GDPR “field.” We gained some insight into how GDPR applies to companies moving forward; in a general sense, there is a lack of a clear plan or outline for complying with it. California has come up with its own new privacy law that has companies concerned and confused as well. The sentiment I came away with is that tech companies face a rather large learning curve when it comes to GDPR compliance.

The second theme was the adoption of security in DevOps: a security-first mindset when developing applications. This idea was pushed further into the light by our very own CEO, Ainsley, with her RSA talk (“Building Security Processes for Developers and DevOps Teams”), which highlighted how the world of technology and development has changed. In brief, the pace at which we push code is impossible to keep up with, security-wise, using old processes, technologies, and teams. Tackling this monumental task means building a multidisciplinary team to handle security and development for your applications. Her talk also focused on how to bridge the communication gap between operations, security, and development, breaking down the walls that currently exist.

The last hot topic was IoT and securing IoT devices. Few companies know how to build secure internet-connected devices, and as the adoption of IoT devices ramps up, consumers are left less secure, with a wider attack surface. For the time being, consumers should be skeptical of IoT devices and weigh the increased level of vulnerability they are opening themselves up to. Any company that wants to play in the IoT space should make a few fundamental considerations. At a minimum, companies should find where their threats lie and build processes to reduce those threats. Releasing information to consumers about what they’re working on fixing, and what the consumer is still vulnerable to, helps as well. There are currently no laws we’re aware of that would force companies to disclose that information to consumers, but companies like Synology, 1Password, and Microsoft do an outstanding job of keeping their customers up to date about what they're working on and what they have remediated.

In Summary

This year’s RSA Conference centered around a few main points that were reiterated throughout. GDPR is here to stay, so how do companies cope with it? Security for DevOps was a major concern for attendees as well, with questions focused on how to get teams to work together, whether they are focused on development or security. Additionally, how can we automate some of the processes involved in developing secure applications? We can help with this point. Lastly, the overall impact of IoT was ever-present, with everyone asking where security in the IoT space is going and how we believe it needs to get there. Did you find these same themes echoed at RSA? Are there things we should know or ideas we missed? Let me know.



Nicholas Bates

Nicholas is The Writing Writer, or Tinfoil Security's new Technical Content Writer. He has a background in network security, where he worked as an engineer for 7 years before joining the Tinfoil team. The father of a 3-year-old daughter, when he is not writing you can find him hiking with his family or practicing Jiu-Jitsu.

Tags: security DevSec DevOps


API Security Scanning: How is it done the right way?

We’re excited to announce our API Security Scanner has been officially launched and is now publicly available! It’s a much-needed tool we’ve been building and rigorously testing for the past year and a half, and we can’t wait to start sharing it with the world. Before we go into the details of how the scanner works, it’s important to start by discussing the problem of API security in general, and why such a tool is needed in the first place.

First, when we say API, it’s worth clarifying that we’re talking about web-based APIs such as REST APIs, web services, mobile-backend APIs, and the APIs that power IoT devices. We are not targeting lower-level APIs like libraries or application binary interfaces. This is an important distinction to make, because the sorts of security vulnerabilities that affect web-based APIs are going to mirror the same categories of vulnerabilities we’ve spent the past seven years defending against, with our web application security scanner.

Just as web applications can be vulnerable to issues like Cross-Site Scripting (XSS) or SQL injection, APIs can also fall prey to similar attacks. As always, it isn’t quite that simple, and the nuances of how these vulnerabilities are actually exploited and detected can vary dramatically between the two types of applications. In the case of XSS, for example, the difference between a vulnerable API and a secure API depends not only on the presence of attacker-controlled sinks in an HTTP response, but also on the content-types of the responses in question, how those responses are consumed by a client, and whether sufficient content-type sniffing mitigations have been enforced.
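
As a hedged illustration (hypothetical endpoint and payload), consider an API that reflects attacker-controlled input back in a response:

HTTP/1.1 200 OK
Content-Type: application/json
X-Content-Type-Options: nosniff

{"name": "<script>alert(1)</script>"}

Served as application/json with content-type sniffing disabled, this response is inert: a browser will not execute the script. Serve the same body as text/html, or drop the nosniff header for a browser willing to sniff, and a user opening that response directly may have the script executed. The payload is identical in both cases; only the headers differ.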

Also worthy of consideration is how APIs handle authentication, especially as compared to web applications. In the case of web applications, authentication is more or less a solved problem. For the most part, the user visits a page with a login form, enters their credentials, submits the form, and gets back a cookie. There are minor variations to this -- sometimes people store the session in local storage or session storage, for example -- but by and large, every web application authenticates in the same way. APIs, on the other hand? Not so much. At an absolute minimum, you need to account for protocols like OAuth2 (and all of its associated grant types!), OpenID Connect, and increasingly, JSON Web Tokens (JWT). Beyond that, it’s also common to layer on other security requirements, like client certificates, or signed requests. Existing web application security scanners have no concept of any of these standards, and even if you managed to get a scanner to authenticate to your API, you’re not going to have much luck coercing it into properly signing your requests.
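
To make the contrast concrete (hypothetical endpoints, purely for illustration): an API key scheme might be a single static header on every request,

Authorization: Token 0123456789abcdef

while an OAuth2 client credentials grant is a multi-step workflow. The client first exchanges its credentials for a short-lived token,

POST /oauth/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=my-client&client_secret=my-secret

and then presents the returned bearer token on each subsequent request:

GET /v1/widgets HTTP/1.1
Host: api.example.com
Authorization: Bearer <token from the previous response>

A scanner has to understand each of these workflows natively, or it will never make it past the front door.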

Lastly, unlike web applications, APIs aren’t discoverable. Unless you’re one of the dozen companies in the world with a HATEOAS-based API, it simply isn’t possible for a security scanner to load up your API, follow all of the links, and automatically discover all of the endpoints in that API, let alone the parameters expected by those endpoints, and any constraints required of them. Without some way of programmatically acquiring this information, API security scanning simply can’t be automated in the same way that web scanning has been.

These are all solvable problems, but they mean that a dynamic security scanner needs to be built from the ground up to understand APIs, how APIs are used, and more importantly, how APIs are attacked. This means that simply repurposing an existing web-application security scanner won’t be sufficient (which is what most other solutions currently do). With this point in mind, our API scanner is an entirely new scanning engine (written in Elixir!), built off of everything we’ve learned over the past seven years of attacking web applications.

To handle the previously mentioned authentication issues, we’ve devised a clever system using something we like to call authenticators. Essentially, we’ve distilled API authentication down to its primitives: whether that’s as simple as adding a header or a parameter to a request, or performing an entire OAuth2 handshake and storing the received bearer token for later. From there, our scanner is able to chain all of these authenticators together, incrementally transforming unauthenticated requests into authenticated requests. Furthermore, because our scanner has such a nuanced understanding of all the discrete steps of an authentication workflow, it becomes possible to detect when any of those steps have failed, and also when any of them aren’t being honored by the server. This uniquely enables us to fuzz the individual steps of an authentication flow, providing us a powerful tool for determining authorization and authentication bypasses.
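
As a rough sketch of the idea -- illustrative only, and far simpler than the real engine -- an authenticator can be modeled as a function from request to request, and a chain as a reduce over a list of such functions:

defmodule Authenticators do
  # Add a static header, such as an API key, to a request (a plain map here).
  def add_header(request, name, value) do
    update_in(request.headers, &[{name, value} | &1])
  end

  # Perform an OAuth2 client-credentials handshake and attach the bearer token.
  def oauth2(request, config) do
    add_header(request, "Authorization", "Bearer " <> fetch_token(config))
  end

  # Chain authenticators: each one incrementally transforms the request.
  def authenticate(request, authenticators) do
    Enum.reduce(authenticators, request, fn authenticator, req -> authenticator.(req) end)
  end

  # Stand-in for the real HTTP token exchange.
  defp fetch_token(_config), do: "stub-token"
end

request = %{method: :get, url: "https://api.example.com/v1/widgets", headers: []}

request
|> Authenticators.authenticate([
  &Authenticators.add_header(&1, "X-Api-Key", "0123456789abcdef"),
  &Authenticators.oauth2(&1, %{token_url: "https://auth.example.com/oauth/token"})
])
|> IO.inspect()

Because each step in the chain is explicit, a failure in any one step can be observed -- and fuzzed -- in isolation.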

To address the discoverability issues inherent with APIs, we approached the problem the same way humans do: with documentation! As a developer looking to use a third-party API, your first stop is always the documentation for that API. Historically, this documentation has almost always been presented as unstructured text, and in a form not conducive to being parsed by software. With standards like Swagger, RAML, and API Blueprint becoming more widespread over recent years, the idea of programmatically specifying an API’s behavior is becoming increasingly popular, and this offers an exciting opportunity for API security scanning. In our experience, we’ve found that Swagger in particular is beginning to win out as the de facto standard for API documentation, and so we’ve designed the first version of our API scanner to ingest Swagger documents, and use them to build a map of an API for scanning.
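
As an abbreviated example, a Swagger 2.0 document for a hypothetical API, describing a single endpoint with a constrained parameter, might look like this:

{
  "swagger": "2.0",
  "info": { "title": "Example API", "version": "1.0" },
  "paths": {
    "/users": {
      "post": {
        "consumes": ["application/x-www-form-urlencoded"],
        "parameters": [
          {
            "name": "email",
            "in": "formData",
            "required": true,
            "type": "string",
            "format": "email",
            "maxLength": 64
          }
        ],
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}

From a document like this, a scanner can recover everything a crawler would otherwise have to discover: the endpoint, its method, its parameters, and the constraints on those parameters.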

Reading in documentation like this nicely solves the issue of being unable to crawl an API, but it also allows us to scan APIs with a level of intelligence that black-box dynamic web application scanning has never had access to. In most variants of web application scanning, the scanning engine crawls the application to determine all available input vectors: forms, links, buttons, really anything that might trigger some logic on the client or server. From there, these inputs are fuzzed to look for security vulnerabilities. The issue is that because this is entirely black-box scanning, it is difficult for a scanner to ensure it is generating good payloads to send to the web application: payloads that, while still being malicious, conform to the format and structure expected by the application. We could send a server every variation of SQL we can think of, but if the server is blocking our requests because they fail the first level of input validation, then we’re never going to make any progress. Our web application scanner addresses this very problem by examining the context in which parameters are used, in order to infer their expected structure. By sidestepping this problem entirely with API scanning, we’ve found we can more easily achieve a level of coverage typically reserved for highly skilled, manual penetration testing.

Parsing Swagger documentation is exactly what lets us sidestep it. In addition to knowing the endpoints to scan, and the parameters on those endpoints, we’re also aware of the types of those parameters and whatever other constraints are specified in the Swagger documentation. It becomes possible for us to know that a given parameter needs to be a string, resembling an email address, of a specific length, and possibly excluding certain characters. Given all of this information, we can begin intelligently generating attack payloads that conform to various subsets of these constraints, allowing us to audit for holes in the server’s intended validation logic, while also giving a suitable jumping-off point for intentionally trying to bypass that validation logic with cleverly constructed payloads.
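
A hypothetical sketch of what that means for the email parameter described above: generate payloads that each relax exactly one constraint, plus one that satisfies every constraint while carrying an attack probe:

payloads = [
  # Violates only the maxLength constraint.
  "user@" <> String.duplicate("a", 100) <> ".com",
  # Violates only the email format constraint.
  "not-an-email",
  # Satisfies every constraint, but probes for SQL injection.
  "user'--@example.com"
]

# Hand each payload to the request engine (stubbed here for illustration).
Enum.each(payloads, &IO.puts/1)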

It’s been a long road to get to this point, but we’re proud to have finally built an API security scanner that approaches the problem from a strong foundation, and with careful thought put into what makes API security scanning difficult. We have a lot of enhancements to make, but what we’ve been shipping to customers over the past year has already filled an important gap in their application security program -- especially with our ever-present focus on integrating security scanning into the DevOps process. Just as with our web application scanner, our API scanner is designed to be integrated directly into the software development life-cycle, so that developers can find and fix vulnerabilities as early as possible, and often without waiting for a dedicated security engineer to get involved. We facilitate this with first-party integrations for tools like Jenkins, and also by providing a REST API that can drive the entire scanning and reporting process, from start to finish.

Security is much too important to be dealt with as an afterthought. That’s why we always strive to enable our customers to push their security up the stack, so they can empower their developers to find and fix vulnerabilities before they become a problem.

Interested in setting up a demo to see for yourself? Find a time that works for you, and schedule a demo right here.


Quinn Wilton

Quinn Wilton is the Grand Magistrate of Security at Tinfoil Security, and the company's resident programming language theorist. When she isn't coding in a functional language like Elixir, she's probably hacking on an interpreter for an esolang of her own, or playing around with dependent types in Idris. Security is always at the forefront of her thoughts, and she enjoys building tools which make it easy for other engineers to write secure code. Her love for security is matched only by her love for bad movies - and does she ever love bad movies.

Tags: XSS security


Just Behave Already: Property Testing

What is property testing? In short, it can be described as a method of testing a program’s output against the expected behavior, or properties, of a piece of code. Why should you care? For the same reason we here at Tinfoil Security care: good testing goes beyond ensuring your code is functional. It can be a crucial line of defense when it comes to the security of your applications, and property testing is a uniquely powerful tool for accomplishing these ends. But before we dive deeper, let’s review more traditional testing.

it "finds the biggest element in the list" do  assert 5 = biggest([5])  assert 6 = biggest([6, 5])  assert 100 =    100..1    |> Enum.to_list()    |> biggest()end

The test above is fairly straightforward. It attempts to check that the biggest function will in fact return the biggest element of a provided list. It falls short in a few noticeable ways: What if the list is empty? What if it isn't sorted? What if there are duplicate integers?

Traditional testing very often focuses on specific examples and is dependent on the assumptions of the programmer. The function above may have been created with only positive integers in mind and it may not occur to the writer to test for cases involving negatives.

This example is a simple one, but it demonstrates a major drawback of traditional testing: it reveals the presence of expected bugs, rather than the absence of unexpected bugs. How would we pursue the latter? Enter property testing.

What is a Property?

Property testing is reliant on describing the properties of a function or piece of code. Very simply, these properties are general rules on how a program should behave. For the example function above, we might define the following:

  • "biggest returns the largest element of a list."

We might describe the properties of other well-known algorithms as such:

  • "sort returns a list with every element in ascending order."
  • "append returns a list with a length equal to the sum of the lengths of both lists passed to it."
  • "append returns a list with every element of of the first list, followed by every element of the second list."

Once we have defined the general properties of a program we can move beyond specific examples. From there we can use generators to test the output of our code against these properties using a variety of generated inputs.
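
To make this concrete, here is a minimal sketch of one of the properties above, written with the StreamData library discussed later in this post. It assumes a test module that does use ExUnit.Case and use ExUnitProperties:

property "sort returns a list with every element in ascending order" do
  check all list <- list_of(integer()) do
    sorted = Enum.sort(list)

    # Every adjacent pair of elements must be in order.
    assert sorted
           |> Enum.chunk_every(2, 1, :discard)
           |> Enum.all?(fn [a, b] -> a <= b end)
  end
end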

How to Describe a Property?

This is easier said than done, however. Describing the properties of a program can be difficult, but there are a few general strategies, as described in Fred Hebert's excellent book on property-based testing in Erlang:

Modeling: Modeling involves reimplementing your algorithm with a simpler (though likely less efficient) one. Our biggest function, for example, could have its output compared with an algorithm that uses sort to arrange a list in ascending order and then returns the final element. Sorting is far less time-efficient, O(n log n) compared to the O(n) of our biggest function, but since it preserves the property we care about, we can use it as a model to test our results against.
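
As a sketch (same test setup as above, and assuming our biggest function is in scope), the model-based property might look like:

property "biggest matches a sort-based model" do
  check all list <- list_of(integer(), min_length: 1) do
    # The model: sort ascending, then take the final element.
    model = list |> Enum.sort() |> List.last()

    assert biggest(list) == model
  end
end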

Equivalent Statements: Equivalent statements reframe the property as a simpler one. For instance, we could say that the element returned by biggest is larger than or equal to every remaining element of the input list. This is not worded the same way as the property we defined above, but it is fundamentally equivalent to it.
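
The same idea as a property:

property "biggest is at least as large as every element of the list" do
  check all list <- list_of(integer(), min_length: 1) do
    largest = biggest(list)

    assert Enum.all?(list, fn element -> largest >= element end)
  end
end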

Symmetry: Some functions have natural inverses. The process of encrypting and decrypting data, for example, can be described by the following properties (a sketch follows the list):

  • The input encrypted, then decrypted will return the original input.
  • The input decrypted, then encrypted will return the original input.
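
A sketch of the first of these properties, assuming hypothetical encrypt/2 and decrypt/2 functions for a symmetric cipher:

property "encrypting, then decrypting, returns the original input" do
  check all plaintext <- binary(), key <- binary(length: 32) do
    assert decrypt(encrypt(plaintext, key), key) == plaintext
  end
end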

Oracles: Oracles involve using a reference implementation to compare your output against and, as such, are perhaps the best way to test the properties of your code. Oracles are most often used when porting existing code from one language to another, or when replacing a working implementation with an improved one.

Implementation

Implementing property tests is not easy. It relies not only on describing the properties you wish to test against, but also on constructing generators to create the large, varied sets of randomized input to feed into your code. A single property may be tested hundreds of times, and generators will often create increasingly complicated inputs across these test iterations, or "generations" as they are called.

One can imagine that this randomly generated input could quickly become too unwieldy for the developer to make sense of. The failing case may contain large amounts of data irrelevant to the actual cause of the failure. To help narrow things down to the true cause, a property testing framework will often attempt to reduce, or "shrink", the failing case down to a minimal reproducible state. This usually involves shrinking integers down to zero, strings to "", and lists to [].

Fortunately, there are a variety of language-specific property testing libraries currently available. StreamData, for example, is an Elixir property testing library - and a candidate for inclusion in Elixir proper - that provides built-in generators for primitive data types as well as tools to create custom ones. Generators can even be used to generate symbolic function calls, making it possible to fuzz and test transitions on a state machine.
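
For example, a custom generator for email-like strings could be sketched with the gen all macro (available after use ExUnitProperties; the domains here are purely illustrative):

email_generator =
  gen all name <- string(:alphanumeric, min_length: 1),
          domain <- member_of(["example.com", "test.org"]) do
    name <> "@" <> domain
  end

# A custom generator composes like any built-in one:
check all email <- email_generator do
  assert String.contains?(email, "@")
end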

Conclusion

As a final note, it should be mentioned that while property testing is a powerful tool, it is not a perfect solution. Describing the properties of a piece of code can be difficult, as can coming up with tests for those properties. Furthermore, these tests rely on well-made generators to come up with varied and unexpected input, which can itself be a difficult and time-consuming task.

It is also important to note that more traditional testing should not be entirely eschewed for property tests. The real strength of property testing is in using generated input to automate all the tedious work of thinking up unusual edge cases and writing individual tests for them, and it is at its best when used with unit tests that check for unique edge cases or document unusual behavior.

At Tinfoil Security, we understand that thorough and effective testing is an essential part of creating efficient and secure technology. If you have any questions or would like to let us know how property testing has helped in your projects, feel free to reach out at contact@tinfoilsecurity.com.


Peter Ludlum

Peter is a Software Engineering Intern at Tinfoil Security. A recent graduate of App Academy, he enjoys nothing more than bringing beautiful (and functional) web pages to life. When he isn't coding, Peter is usually lost in a book or strumming out a new tune on the ukulele.

Tags: security guide DevSec DevOps


Strutshock: Apache Struts 2 (RCE CVE-2017-5638) in Plain English

If you’ve paid attention to the news recently, you’ve certainly heard about the massive breach at Equifax, now estimated to have exposed the sensitive personal data of 145 million people. You’ve probably also heard it was due to unpatched software. So what was the issue, why does it matter, and what can you do? Let’s start with the basics.

In vulnerable versions of Struts 2, there is a bug in how errors are handled when the server receives a request containing a malformed Content-Type, Content-Length, or Content-Disposition header. With an unexpected or malformed request, you would expect the server to send back a 400 error, or otherwise ignore the request. With a vulnerable version of Struts 2, however, the Jakarta Multipart parser will parse that header string and, if it contains executable OGNL code, execute it. What, then, is OGNL, and why does it matter that it parses and executes some code? OGNL, or Object Graph Navigation Language, is a Java library which, on the development side, provides convenient tools for getting and setting values on Java objects -- for instance, connecting the forms on your web page to the Struts representation of that data on the server. Being able to inject commands through OGNL means an attacker can substitute values on the server such that it will run whatever malicious code the attacker wishes to send.
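
To make that concrete, here is a heavily simplified sketch of the shape of such a header. (Real, public proof-of-concept payloads are considerably longer, and start by disabling OGNL’s sandboxing before running a command.)

Content-Type: %{(#_='multipart/form-data').
  (#cmd='whoami').
  (#p=new java.lang.ProcessBuilder({'/bin/bash','-c',#cmd})).
  (#p.start())}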

Short version: malicious Content-Type/Length/etc. header -> Jakarta Multipart parser error -> OGNL receives the header as executable code -> server now runs malicious code as if it were part of the original program. So if we have a site we think is vulnerable, we could set up a script that will send one of these headers that spawns a bash shell and takes a command from the user. Have a look at the video for a quick demo.

As you can see, this gives us quite a bit of power over the system. We can traverse through directories, create files, view environment variables (which in turn gives us access to the database), and even cause the site to crash by deleting necessary files.

The Danger

If a server is running a vulnerable version of Struts 2, an attacker who knows how to exploit this might as well have your server running locally on their computer. They can access the source code of your site. If your site has access to a database, so do they. If they want to tell your server to sleep, putting you out of business for a while, they can. Even if your server has limited privileges, or the site runs on a virtual machine, the attacker might be able to learn something about where your sensitive data actually lives that could be leveraged to launch an attack where it will cause the most harm.

The Solution

You probably hear it all the time, but keep your software up to date. Test your production code, and make sure you’re able to roll out the most up-to-date versions of the libraries and technologies you rely upon as quickly as you can once they are available. The fix for this vulnerability is available in Struts 2 versions 2.5.10.1 and 2.3.32 and later, both of which have been available since March. Note that the Equifax breach happened at the end of July, well after the tools were available to fix the vulnerability in question. Applying the patch is simpler than trying to roll your own header validation, and far quicker.

If you don’t know if you’re vulnerable, or if you think you’ve set up an effective solution, you can check using our free service here. If you’re already using our web scanner product, you’re already being checked for the Strutshock vulnerability in your scans, along with many other common vulnerabilities. Thanks for reading, and stay secure.


Alex Bullen

Alex is Tinfoil Security's Top-Shelf Programmer (and fetcher of things from high shelves). A former psychology wonk and recent App Academy grad, Alex endeavors to treat every challenge as an opportunity to improve his code-fu. When not busily building blocks of precisely put code, you can find him reading fantasy novels or practicing kung fu.

Tags: security vulnerabilities


Subresource Integrity

If you’ve been keeping up with recent browser developments you may have noticed that in the past few weeks both Chrome and Firefox have started to support subresource integrity, with companies like Github making an active push for other sites to make use of the functionality. This is a low-risk change that offers tremendous security gains for your users, so we just pushed out an update that makes it easier to start using subresource integrity on your website.

Subresource integrity is a new browser feature that allows websites to ensure the integrity of resources loaded from external sources, such as content delivery networks (CDNs). Loading assets from a CDN is a common technique used by websites to speed up page loads, including for common JavaScript libraries like jQuery.

As is the nature of loading arbitrary code, this has always opened up websites to the possibility of being attacked through their CDN: an insecure or malicious CDN holds the potential to insert malicious code onto any website to which it serves assets. Subresource integrity serves to mitigate this attack by ensuring that all loaded resources contain the exact content expected by the website. This is done through the use of a cryptographic digest, computed on all fetched resources, that is then compared against an expected digest. This lets the browser detect resources that have been tampered with, and abort loading them before any malicious code is executed.

Protecting a resource is done by adding the “integrity” attribute to an asset’s HTML tag:

<script src="https://example.com/include.js"
        integrity="sha256-Rj/9XDU7F6pNSX8yBddiCIIS+XKDTtdq0//No0MH0AE="
        crossorigin="anonymous"></script>

It’s an elegant solution to a very serious risk, and one we recommend implementing. It won’t protect all of your users -- Microsoft Edge still doesn’t support the feature -- but it can serve as a valuable line of defense in the event of a breach of your CDN. Many popular web frameworks provide libraries that make it easy to enable subresource integrity on your assets, and further instructions on making use of the technology are available on the Mozilla Developer Network.

Going forward, all Tinfoil Security scans will flag external resources that are not protected by subresource integrity. Give it a try by signing up for our 30-day free trial.


Quinn Wilton

Quinn Wilton is the Grand Magistrate of Security at Tinfoil Security, and the company's resident programming language theorist. When she isn't coding in a functional language like Elixir, she's probably hacking on an interpreter for an esolang of her own, or playing around with dependent types in Idris. Security is always at the forefront of her thoughts, and she enjoys building tools which make it easy for other engineers to write secure code. Her love for security is matched only by her love for bad movies - and does she ever love bad movies.

Tags: browser security security New Feature