RSA Conference 2019: Questions, Comments, Concerns?

What Just Happened?!

A little over a month ago, the Tinfoil Security team was out in full force at RSA, where we met and talked with over a thousand of you. We were thrilled by these conversations: what we have been building lines up with what developers, CISOs, and security engineers see as relevant and essential. The fact that security keeps moving to the forefront is good for everyone. As security awareness spreads, the expectation for security will rise, and companies with poor security practices will lose out, while consumer information is protected by better and continually improving security practices and processes.

The Takeaways

There were a few things that we believe stood out the most at the conference. First, GDPR: what is going on with it, how do we comply, and do we need to comply? These were just a few of the questions floating around. GDPR has created an entire sub-industry: at this year's RSA, there were many consulting, auditing, and assessment companies playing explicitly in the GDPR "field." We gained some insight into how GDPR applies to companies moving forward; in a general sense, there is still no clear plan or outline for GDPR compliance. California has introduced its own new privacy law, which has companies concerned and confused as well. The sentiment I came away with is that tech companies face a rather large learning curve when it comes to GDPR compliance.

The second theme was the adoption of security for DevOps, a security-first mindset when developing applications. This idea was pushed further into the light by our very own CEO, Ainsley, whose RSA talk ("Building Security Processes for Developers and DevOps Teams") highlighted how the world of technology and development has changed. In brief: the pace at which we push code is impossible to keep up with, security-wise, using old processes, technologies, and teams. Tackling this monumental task means building a multidisciplinary team to handle security and development for your applications. Her talk also focused on how to bridge the communication gap between operations, security, and development, breaking down the walls that currently exist.

The last hot topic was IoT and securing IoT devices. Few companies know how to build secure internet-connected devices, and as the adoption of IoT devices ramps up, consumers are left less secure, with a wider attack surface. For the time being, consumers should be skeptical of IoT devices and weigh the increased level of vulnerability they are opening themselves up to. Any company that wants to play in the IoT space should make a few fundamental considerations. At a minimum, companies should find where their threats lie and build processes to reduce those threats. It also helps to release information to consumers about what they're working on fixing and what the consumer is still vulnerable to. There are currently no laws we're aware of that would force companies to disclose that information, but companies like Synology, 1Password, and Microsoft do an outstanding job of keeping their consumers up to date about what they're working on and what they have remediated.

In Summary

This year's RSA Conference centered on a few main points that were reiterated throughout the entire event. GDPR is here to stay, so how do companies cope with it? Security for DevOps was a major concern as well, with questions focused on how to get teams to work together, whether they are focused on development or security. Additionally, how can we automate some of the processes involved in developing secure applications? We can help with this point. Lastly, the overall impact of IoT was ever-present, with everyone asking where security in the IoT space is going and how we believe it needs to get there. Did you find these same themes echoed at RSA? Are there things we should know or ideas we missed? Let me know.

Nicholas Bates

Nicholas is The Writing Writer, or Tinfoil Security's new Technical Content Writer. He has a background in network security, where he worked as an engineer for 7 years before joining the Tinfoil team. A father to a 3-year-old daughter, when he is not writing you can find him hiking with his family or practicing Jiu-Jitsu.

Tags: security DevSec DevOps

You've Got the Swagger, We've Got the SaaS

API security scanning changes today: Tinfoil Security is launching our new patent-pending API Scanner! After rigorous testing and astounding feedback from developers leveraging our scanner, we are proud to offer it to the public. We are giving developers and companies the ability to scan and secure APIs with two different methods of deployment: on-premise and SaaS. We are very excited to invite you to see the API Scanner in action.

As a clarification, we’re talking about web-based APIs such as REST APIs, web services, mobile-backend APIs, and the APIs that power IoT devices. We are not targeting lower-level APIs like libraries or application binary interfaces - a crucial distinction to make. The sorts of security vulnerabilities that affect web-based APIs are going to mirror the same categories of vulnerabilities we’ve spent the past eight years defending against with our web application scanner.

We’ve built a security scanner that understands how APIs work from the ground up, how they’re used, and most importantly, how they’re attacked. Existing solutions use a web application scanner to scan APIs, but that approach does not take into account the unique nature of APIs. Just as web applications can be vulnerable to issues like Cross-Site Scripting (XSS) or SQL injection, APIs can also fall prey to similar attacks. It isn’t quite that simple, though, and the nuances of how these vulnerabilities are detected and exploited can vary drastically between the two types of applications. In the case of XSS, for example, the difference between a vulnerable API and a secure API depends not only on the presence of attacker-controlled sinks in an HTTP response, but also on the content-types of the responses in question, how a client consumes those responses, and whether sufficient content-type sniffing mitigations have been enforced.
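As a sketch of that nuance (a simplified illustration, not our scanner's actual detection logic), whether a reflected payload is a usable XSS sink can hinge on the response's declared content type and on whether content-type sniffing mitigations are in place:

```python
# Simplified sketch: decide whether a response that reflects
# attacker-controlled input could act as an XSS sink. The helper name
# and rules are illustrative, not the scanner's real logic.

def is_potential_xss_sink(content_type: str, headers: dict) -> bool:
    """Return True if a browser might render this response as HTML."""
    renderable = {"text/html", "application/xhtml+xml"}
    base_type = content_type.split(";")[0].strip().lower()
    if base_type in renderable:
        return True
    # Without "X-Content-Type-Options: nosniff", some browsers may sniff
    # ambiguous types (e.g. text/plain) and render them as HTML anyway.
    nosniff = headers.get("X-Content-Type-Options", "").lower() == "nosniff"
    sniffable = {"text/plain", ""}
    return base_type in sniffable and not nosniff

# A JSON response with nosniff set is not a direct XSS sink:
print(is_potential_xss_sink("application/json",
                            {"X-Content-Type-Options": "nosniff"}))  # False
print(is_potential_xss_sink("text/html; charset=utf-8", {}))         # True
```

The same reflected string is harmless in one response and exploitable in the other, which is exactly why API scanning can't just reuse web-page heuristics.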

APIs also aren’t discoverable. Unless you’re one of the dozen companies in the world with a HATEOAS-based API, it isn’t possible for a security scanner to load up your API, follow all of the links, and automatically discover all of the endpoints in that API. The parameters expected by those endpoints, and any constraints required by them, are even harder to discover. Without some way of programmatically acquiring this information, API security scanning simply can’t be automated in the same way that web scanning has been.

To deal with these discoverability issues in APIs, we looked at standards like Swagger, RAML, and API Blueprint. We’ve found that Swagger (now known as the OpenAPI Specification), in particular, is winning out as the standard for API documentation. In response, we’ve designed our API scanner to ingest Swagger documents and use them to build a map of an API for scanning. Doing so solves the issue of being unable to crawl an API. It also allows us to scan APIs with a higher level of intelligence than black-box dynamic web application scanning has ever been able to.
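A minimal sketch of that idea, using a tiny hypothetical Swagger 2.0 document and an illustrative helper (a real ingester must also resolve $refs, schemas, and security definitions):

```python
# Sketch: turning a Swagger (OpenAPI 2.0) document into a map of
# scannable endpoints. The document and helper are illustrative.

swagger_doc = {
    "swagger": "2.0",
    "basePath": "/v1",
    "paths": {
        "/users/{id}": {
            "get": {
                "parameters": [
                    {"name": "id", "in": "path", "type": "integer", "required": True}
                ]
            },
            "delete": {
                "parameters": [
                    {"name": "id", "in": "path", "type": "integer", "required": True}
                ]
            },
        },
        "/users": {
            "post": {
                "parameters": [{"name": "body", "in": "body", "required": True}]
            }
        },
    },
}

def enumerate_endpoints(doc):
    """Yield (method, full_path, parameters) for every operation."""
    base = doc.get("basePath", "")
    for path, operations in doc["paths"].items():
        for method, op in operations.items():
            yield method.upper(), base + path, op.get("parameters", [])

for method, path, params in sorted(enumerate_endpoints(swagger_doc)):
    print(method, path, [p["name"] for p in params])
```

Every endpoint, its expected parameters, and their constraints come straight from the specification, which is what makes automated crawling of an otherwise undiscoverable API possible.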

We have addressed authentication issues using something we call authenticators. We've distilled API authentication down to its foundations, whether that's as simple as adding a header or parameter to a request, or as complex as performing an entire OAuth2 handshake and storing the received bearer token for later use. We're then able to chain authenticators together, incrementally transforming unauthenticated requests into authenticated ones. Having such a nuanced understanding of each step of an authentication workflow lets us detect when any of those steps has failed, and when the server isn't honoring one. This allows us to fuzz the individual steps of an authentication flow, providing a powerful tool for detecting authorization and authentication bypasses.
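The chaining idea can be sketched as ordinary function composition; the authenticator names and request shape below are illustrative, not our actual implementation:

```python
# Hedged sketch of authenticator chaining: each authenticator is a
# small function that transforms a request (here, a plain dict).
from functools import reduce

def api_key_header(request):
    """Add a static API key header."""
    headers = dict(request.get("headers", {}))
    headers["X-API-Key"] = "example-key"               # placeholder credential
    return {**request, "headers": headers}

def bearer_token(request):
    """Attach a previously acquired OAuth2 bearer token."""
    headers = dict(request.get("headers", {}))
    headers["Authorization"] = "Bearer example-token"  # placeholder token
    return {**request, "headers": headers}

def authenticate(request, authenticators):
    """Chain authenticators, turning an unauthenticated request into an
    authenticated one step by step."""
    return reduce(lambda req, auth: auth(req), authenticators, request)

req = authenticate({"method": "GET", "url": "/v1/users"},
                   [api_key_header, bearer_token])
print(sorted(req["headers"]))  # ['Authorization', 'X-API-Key']
```

Because each step is a separate, composable function, individual steps can be skipped or fuzzed in isolation, which is what makes probing for authentication bypasses tractable.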

Some features that will get any developer excited: 

  • Intelligent payload generation
  • A powerful REST API to control the scanner and its reports
  • First-class support for API authentication workflows

By intelligent payload generation we mean parameter fuzzing that takes into account the schemas of the parameters: parameter types, constraints, whether a parameter is required, and valid inhabitants (example values), to name a few.
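A hedged sketch of what schema-aware payload generation might look like (an illustrative helper, not the product's code):

```python
# Sketch: deriving candidate fuzz values from a parameter's declared
# type and constraints instead of throwing payloads at it blindly.

def candidate_payloads(schema):
    """Generate boundary and fuzz values from a parameter schema."""
    values = []
    if schema.get("type") == "integer":
        lo = schema.get("minimum")
        hi = schema.get("maximum")
        if lo is not None:
            values += [lo, lo - 1]          # boundary and just-out-of-range
        if hi is not None:
            values += [hi, hi + 1]
        values += [0, -1]                   # common edge cases
    elif schema.get("type") == "string":
        max_len = schema.get("maxLength", 8)
        values += ["", "A" * (max_len + 1)]  # empty and over-long
        for probe in ("'", "<script>"):      # classic injection probes
            values.append(probe)
    if not schema.get("required", False):
        values.append(None)                 # try omitting optional parameters
    return values

print(candidate_payloads(
    {"type": "integer", "minimum": 1, "maximum": 10, "required": True}
))  # [1, 0, 10, 11, 0, -1]
```

Knowing the schema means the scanner can aim at the exact boundaries where validation bugs live, rather than wasting requests on payloads the API would reject outright.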

In addition to vulnerability scanning, our scanner also performs correctness checking and looks for bugs to reduce an API’s attack surface. With full integration into your existing CI/CD pipeline, we create and track issues and vulnerabilities easily, allowing your teams to focus on what’s most important to your organization.

You can, for the first time, secure your APIs with a scanner that was built specifically for APIs. We’re not just pointing our web application scanner at your API and calling it good. Our API scanning is intelligent and thorough. By using your API’s specification as an outline, we focus on security vulnerabilities as they manifest themselves within APIs. This means fewer false positives, a higher degree of coverage, and a better understanding of the risk posture posed by your APIs.

There is every reason to give our unrivaled API Scanner a test drive, and you can do it right now. For a more detailed feature breakdown, please refer to this post.

Nicholas Bates

Nicholas is The Writing Writer, or Tinfoil Security's new Technical Content Writer. He has a background in network security, where he worked as an engineer for 7 years before joining the Tinfoil team. A father to a 3-year-old daughter, when he is not writing you can find him hiking with his family or practicing Jiu-Jitsu.

Tags: Free Scan Launch DevSec DevOps

Just Behave Already: Property Testing

What is property testing? In short, it can be described as a method of testing a program's output against the expected behavior, or properties, of a piece of code. Why should you care? For the same reason we here at Tinfoil Security care: good testing goes beyond ensuring your code is functional. It can be a crucial line of defense when it comes to the security of your applications, and property testing is a uniquely powerful tool for accomplishing these ends. But before we dive deeper, let's review more traditional testing.

it "finds the biggest element in the list" do
  assert 5 = biggest([5])
  assert 6 = biggest([6, 5])

  assert 100 =
    100..1
    |> Enum.to_list()
    |> biggest()
end

The test above is fairly straightforward. It attempts to check that the biggest function will in fact return the biggest element of a provided list. It falls short in a few noticeable ways: What if the list is empty? What if it isn't sorted? What if there are duplicate integers?

Traditional testing very often focuses on specific examples and is dependent on the assumptions of the programmer. The function above may have been created with only positive integers in mind and it may not occur to the writer to test for cases involving negatives.

This example is a simple one, but it demonstrates a major drawback of traditional testing: it reveals the presence of expected bugs, rather than the absence of unexpected bugs. How would we pursue the latter? Enter property testing.

What is a Property?

Property testing is reliant on describing the properties of a function or piece of code. Very simply, these properties are general rules on how a program should behave. For the example function above, we might define the following:

  • "biggest returns the largest element of a list."

We might describe the properties of other well-known algorithms as such:

  • "sort returns a list with every element in ascending order."
  • "append returns a list with a length equal to the sum of the lengths of both lists passed to it."
  • "append returns a list with every element of the first list, followed by every element of the second list."

Once we have defined the general properties of a program we can move beyond specific examples. From there we can use generators to test the output of our code against these properties using a variety of generated inputs.
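Sketched below in Python (the post's own example is in Elixir) is a minimal hand-rolled version of this idea; `biggest` is a stand-in implementation and `gen_int_list` an illustrative generator, while real frameworks add shrinking and far richer generation:

```python
# Minimal property-testing sketch using only the standard library.
import random

def biggest(lst):
    """The function under test (stand-in implementation)."""
    result = lst[0]
    for x in lst[1:]:
        if x > result:
            result = x
    return result

def gen_int_list(max_len=20):
    """Generator: random non-empty list of integers, negatives included."""
    return [random.randint(-1000, 1000) for _ in range(random.randint(1, max_len))]

# Property: biggest returns an element of the list that is >= every element.
random.seed(0)  # deterministic for the example
for _ in range(200):
    lst = gen_int_list()
    b = biggest(lst)
    assert b in lst and all(b >= x for x in lst), f"property failed for {lst}"
print("200 generated cases passed")
```

Instead of three hand-picked examples, the property is exercised against hundreds of inputs the programmer never thought to write down, including negatives, duplicates, and single-element lists.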

How to Describe a Property?

This is easier said than done, however. Describing the properties of a program can be difficult, but there are a few general strategies, as described in Fred Hebert's excellent book on property testing in Erlang:

Modeling: Modeling involves reimplementing your algorithm with a simpler (though likely less efficient) one. The output of our biggest function, for example, could be compared with that of an algorithm that uses sort to arrange a list in ascending order and then returns the final element. Sort is less time-efficient, O(n log n) compared to the O(n) of our biggest function, but since it retains the same properties as biggest we can use it as a model to test our results against.
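A sketch of the modeling strategy in Python, with `biggest` as a stand-in implementation and a `sorted`-based model (illustrative, not taken from the book):

```python
# Modeling: check `biggest` against a simpler, slower reimplementation.
import random

def biggest(lst):
    """The O(n) function under test (stand-in implementation)."""
    result = lst[0]
    for x in lst[1:]:
        if x > result:
            result = x
    return result

def biggest_model(lst):
    """Simpler O(n log n) model: sort ascending, take the last element."""
    return sorted(lst)[-1]

random.seed(1)
for _ in range(100):
    lst = [random.randint(-50, 50) for _ in range(random.randint(1, 15))]
    assert biggest(lst) == biggest_model(lst)
print("model agrees on 100 random lists")
```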

Equivalent Statements: Equivalent statements are used to reframe the property into a simpler one. For instance, we could say that the element returned by biggest is larger than or equal to any of the remaining elements of the input list. This simplified property is not stated the same way, but it is fundamentally equivalent to the one we defined above.

Symmetry: Some functions have natural inverses. The process of encrypting and decrypting data, for example, can be described by the following properties:

  • Input that is encrypted, then decrypted, returns the original input.
  • Input that is decrypted, then encrypted, returns the original input.
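The round-trip idea can be sketched with a toy XOR cipher (illustrative only, and emphatically not real cryptography):

```python
# Symmetry: encryption and decryption are inverses, so either
# round-trip must return the original input. Toy XOR "cipher" only.
import random

KEY = 0x5A

def encrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)

def decrypt(data: bytes) -> bytes:
    # XOR with the same key is its own inverse.
    return bytes(b ^ KEY for b in data)

random.seed(2)
for _ in range(100):
    msg = bytes(random.randrange(256) for _ in range(random.randint(0, 32)))
    assert decrypt(encrypt(msg)) == msg   # encrypt-then-decrypt round-trips
    assert encrypt(decrypt(msg)) == msg   # decrypt-then-encrypt round-trips
print("round-trip property held for 100 random messages")
```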

Oracles: Oracles involve using a reference implementation to compare your output against and, as such, are perhaps the best way to test the properties of your code. Oracles are most often used when porting existing code from one language to another or when replacing a working implementation with an improved one.


Implementing property tests is not easy. It relies not only on describing the properties you wish to test against, but also on constructing generators to create the large, varied sets of randomized input to feed into your code. A single property may be tested hundreds of times, and generators will often create increasingly complicated inputs across these test iterations, or "generations" as they are called.

One can imagine that this randomly generated input could quickly become too unwieldy for the developer to make sense of. The failing case may contain large amounts of data irrelevant to the actual cause of the failure. To narrow things down to the true cause, a property testing framework will often attempt to reduce, or "shrink", the failing case down to a minimal reproducible state. This usually involves shrinking integers down to zero, strings to "", and lists to [].
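A rough sketch of how shrinking might work (a deliberately naive strategy; real frameworks are far more sophisticated):

```python
# Shrinking sketch: given a failing input, greedily try smaller
# variants (drop elements, move integers toward zero) and keep any
# variant that still fails, ending at a small counterexample.

def shrink_list(lst, fails):
    """Repeatedly simplify `lst` while the predicate `fails` holds."""
    current = list(lst)
    changed = True
    while changed:
        changed = False
        # Try removing each element.
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if fails(candidate):
                current, changed = candidate, True
                break
        if changed:
            continue
        # Try moving each integer one step toward zero.
        for i, x in enumerate(current):
            if x != 0:
                step = 1 if x > 0 else -1
                candidate = current[:i] + [x - step] + current[i + 1:]
                if fails(candidate):
                    current, changed = candidate, True
                    break
    return current

# Suppose a property fails whenever a list contains an element > 50:
fails = lambda lst: any(x > 50 for x in lst)
print(shrink_list([17, 3, 17, 99, -4], fails))  # [51]
```

From a noisy five-element failure, the shrinker hands back a single boundary value, which is exactly the kind of counterexample a developer can reason about.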

Fortunately, there are a variety of language-specific property testing libraries currently available. StreamData, for example, is an Elixir property testing library (and a candidate for inclusion in Elixir proper) that provides built-in generators for primitive data types as well as tools to create custom ones. Generators can even be used to generate symbolic function calls, making it possible to fuzz and test transitions on a state machine.


As a final note, it should be mentioned that while property testing is a powerful tool, it is not a perfect solution. Describing the properties of a piece of code can be difficult, as can coming up with tests for those properties. Furthermore, these tests rely on well-made generators to come up with varied and unexpected input, which can itself be a difficult and time-consuming task.

It is also important to note that more traditional testing should not be entirely eschewed for property tests. The real strength of property testing is in using generated input to automate all the tedious work of thinking up unusual edge cases and writing individual tests for them, and it is at its best when used with unit tests that check for unique edge cases or document unusual behavior.

At Tinfoil Security, we understand that thorough and effective testing is an essential part of creating efficient and secure technology. If you have any questions or would like to let us know how property testing has helped in your projects, feel free to reach out at

Peter Ludlum

Peter is a Software Engineering Intern at Tinfoil Security. A recent graduate of App Academy, he enjoys nothing more than bringing beautiful (and functional) web pages to life. When he isn't coding, Peter is usually lost in a book or strumming out a new tune on the ukulele.

Tags: security guide DevSec DevOps

DevSec: Empowering DevOps with Security

We’ve had an interesting journey over the past few years at Tinfoil. Each year technology evolves, but security is often left behind. As more of you scan with us, we see more issues with current security processes. The biggest problem? Agile development.

We love pushing code often. We’ll make changes to our website or scanner on a daily (and sometimes hourly) basis. As a small team we’re able to run our security tests pre-deploy without a problem, but have noticed a large gap amongst our customers. Over the past year we’ve worked with some of the best and largest enterprise companies to establish a new security practice: empowering DevOps with security and bringing it closer to the source.

Security teams are small, much smaller than development teams. Some companies have a single person thinking about security issues; others have hundreds. A few years back, either was OK: new code was deployed once every few months and went through a full QA and penetration testing process. With continuous integration systems and DevOps teams forming, security can't keep up. New code is deployed daily, if not hourly, and security has yet to become part of the testing process.

Tinfoil is now working with security teams to relieve some of the pressure so they can work on the big picture. Starting with our web application scanner, you can now pull Tinfoil into your DevOps process without adding any extra burden. Many customers use our API to hook into their CI systems, and we're working on specific integrations with tools like Jenkins and CircleCI to make the process even easier. These integrations are set up so that as soon as new versions of a website are pushed out, Tinfoil scans are run. As we find vulnerabilities, we have many ways to export the data, including directly into your JIRA or issue tracker, so your developers can fix the problems right away. Our goal is to simplify security without compromising it.

DevSec is the joining of DevOps and security. Your engineers should feel empowered by security, not burdened by it. We'll be publishing a series of posts on best practices for getting your development teams up and running with a DevSec process. As always, you're welcome to use Tinfoil to supplement this process. We welcome questions and feedback as we explore this new focus shift. Email or chat with us anytime.

Looking forward to more years of interesting adventures and challenges.

Ainsley Braun

Ainsley Braun is the co-founder and CEO of Tinfoil Security. She's consistently looking for interesting, innovative ways to improve the way security is currently implemented. She spends a lot of her time thinking about the usability and pain points of security, and loves talking with Tinfoil's users. She also loves rowing, flying kites, and paragliding.

Tags: security business advice new features DevSec DevOps