Synopsys acquires Tinfoil Security, DAST and API testing solutions provider
January 09, 2020
Original post can be found here.
Synopsys welcomes Tinfoil Security, whose DAST and API testing solutions broaden our market-leading security portfolio and strengthen the Polaris platform.
Today, Synopsys completed the acquisition of Tinfoil Security, an innovative provider of dynamic application security testing (DAST) and application programming interface (API) testing. This acquisition tightly aligns with the vision Synopsys began with when we established the Software Integrity Group over five years ago. Tinfoil’s solutions will broaden what is already the most comprehensive portfolio in the market and will strengthen the Polaris Software Integrity Platform™.
DAST is a staple of modern application security testing programs, and Synopsys has a distinguished history of delivering DAST via Managed Services. The next logical step to extend our capabilities is to bring in Tinfoil’s proven DAST tool, which readily integrates into DevOps workflows and empowers developers to engage in application security.
API security testing is a relatively new addition to security testing programs, and Tinfoil Security is an innovator in this emerging area. The Tinfoil API Scanner detects vulnerabilities in APIs, including those on mobile back-end servers and IoT devices, as well as RESTful APIs. This testing capability, built with the developer in mind, fits seamlessly into existing development processes.
Synopsys has been building a comprehensive, end-to-end portfolio for software security and quality, and the Tinfoil Security acquisition is an important step. With the addition of Tinfoil’s products, the Polaris platform will provide organizations all the tools they need to build secure, high-quality software, plus the integrations to do it faster.
The updated Synopsys Software Integrity portfolio features solutions with these capabilities:
- Static application security testing (SAST) to address security and quality defects with our proven solution for static code analysis.
- Software composition analysis (SCA) to identify vulnerabilities and license compliance issues in open source software, including a unique capability for testing binaries.
- Interactive application security testing (IAST) with active verification and sensitive-data tracking for web-based applications—an industry first.
- Comprehensive, automated black box fuzzing to discover security weaknesses in software, with more than 250 network protocols and file formats supported.
- Next-generation DAST to identify security vulnerabilities in web applications.
- API testing to test the RESTful APIs used to build web applications leveraging microservice architectures.
We’re excited to have the Tinfoil team join Synopsys, and we extend them a warm welcome. We also welcome Tinfoil Security customers and look forward to supporting their continued success.
Read the press release.
Shifting to an Integrated Security Approach
November 22, 2019
At Tinfoil, we’re going to spend this month taking time to care about one of the most important things - you! That’s right, this month is our month of empowering customers for success. We’ll be writing blog posts and posting videos to help make implementing security for DevOps easier. If you have any questions you want answered about anything security or SecDevOps related (or kitten related, because we like those, too!), email firstname.lastname@example.org.
Our first post is a question we see a lot:
I get a lot of pushback from developers on having to worry about security. They produce a lot of our security bugs. How can I fix this?
This is a great question! Security isn’t easy, otherwise everybody would be secure. To implement security across your development teams, you need to identify the root of the problem. Do your developers feel held back from building? Does it feel like there’s so much added process that it blocks development? Is there a barrier of intimidation to be tackled to really empower your developers? We’ve seen each of these cases, but understanding what’s happening in your organization makes all the difference.
The most common objection we hear from developers stems from one fact: security is hard… really hard! Oftentimes, this difficulty intimidates engineers. Perhaps there’s fear for their job security, or maybe the tools you’re providing your developers are too difficult to understand. Fear is most easily assuaged by good education. Turn security into a fun competition and you’ll often see developers come out of their shells, challenging themselves and you. Look at how your current security tools can integrate into the development process, be simplified, and be more easily understood. Do your tools produce a 97-page PDF? That’s probably not going to work for your development teams.
Sometimes, developers will push back on implementing security because it takes too much time. If the new process you’ve given your development team is blocking your engineers from creating something new, it’s best to add security as a process without making your developers ever _feel_ like there’s a new process. Hooking into your CI system so vulnerabilities are found with the existing testing process lets security get tested with all of their existing unit, regression, and integration tests. Inserting a vulnerability into their issue tracker as a normal bug allows your developers to start looking at it in the same place as all of their other features and bugs. Bonus points if your security tools provide remediation instructions for each vulnerability! Security will naturally become less intimidating, and your developers will learn to consistently produce more secure code.
The least common objection we see is complacency: “security doesn’t matter to me.” This is mostly an educational gap that needs to be worked through. In this rare case, understanding your team dynamic and what drives each team lead and member is crucial. Sometimes gamifying security piques the team’s interest. Sometimes the drive to learn, paired with classes examining how the team’s past vulnerabilities were introduced, can help. Bonus and reward structures might help with a more stubborn team.
Shifting from a traditional cybersecurity approach to an integrated security approach is tough. It takes management buy-in and a lot of hard work, but it pays off in the long run. You know your team better than we do, and each team can be motivated in different ways; we’re happy to help. Let us know how we can!
This post is from our month of empowering customers for success series. Have a question you want answered about anything security or SecDevOps related (or kitten related, because we like those, too!)? Email email@example.com.
Wormwood - An Explicit Way to Test Absinthe GraphQL APIs
October 23, 2019
We love GraphQL at Tinfoil! We use it extensively in our Elixir and Phoenix powered API scanner. We try to test our code using ExUnit whenever possible to help ensure a stable and smooth development cycle. Testing an Absinthe GraphQL API usually follows a pattern: set up a ConnCase, make the request, then validate that the data returned from the request is valid. A lot like the following:
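A minimal sketch of that pattern (module names and the “/api” endpoint path here are hypothetical):

```elixir
defmodule MyAppWeb.Schema.UsersQueryTest do
  use MyAppWeb.ConnCase, async: true

  # The GraphQL document lives in a module attribute as a raw string.
  @get_users_query """
  query GetUsers {
    users {
      id
      name
    }
  }
  """

  test "lists all users", %{conn: conn} do
    # Make a real HTTP request through the Phoenix endpoint...
    conn = post(conn, "/api", %{"query" => @get_users_query})

    # ...then parse the JSON response and validate the data by hand.
    assert %{"data" => %{"users" => users}} = json_response(conn, 200)
    assert is_list(users)
  end
end
```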
This works well! But as a result we may accumulate a whole lot of boilerplate code for setting up these Plug Conns and parsing their results. Not to mention the frequent module attributes and strings used to contain our GraphQL queries inside the unit test module itself. We figured there’s probably a better way to write these sorts of tests that can leverage the power of the Absinthe library itself, rather than sending HTTP requests during a test run.
After some experimentation we created Wormwood, a small and open source Elixir library to assist with Absinthe GraphQL document testing in ExUnit. We can eliminate large chunks of Plug.Conn boilerplate, remove static strings of query code, explicitly scope a module to a single query document and schema, and call our GraphQL API like this instead:
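A sketch of the Wormwood version (the schema module and file path are hypothetical):

```elixir
defmodule MyApp.Schema.UsersQueryTest do
  use ExUnit.Case, async: true
  use Wormwood.GQLCase

  # Scope this module to a single schema and a single query document.
  load_gql MyApp.Schema, "test/gql/get_users.gql"

  test "lists all users" do
    # No HTTP involved: the document runs straight through Absinthe.
    assert {:ok, query_data} = query_gql(variables: %{}, context: %{})
    assert is_list(get_in(query_data, [:data, "users"]))
  end
end
```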
In the above snippet, Wormwood loads, parses, and validates the query document at compile time; then it runs that loaded query against the specified schema using Absinthe itself. With this method of GraphQL testing, it’s explicit which query document we are testing and which schema we are executing it against. We also gain the ability to test against the errors that Absinthe can return at the different phases of the pipeline, and even to control the pipeline itself! (More on that later.)
Using Wormwood in Your Own App
We’ll break down each of the steps for utilizing Wormwood in your own testing setup. You can get (and contribute to!) Wormwood on our GitHub! Wormwood is also available on the Hex package repository. Some of the example code shown in this post is available in the repo itself.
The first requirement for using Wormwood is to break out your queries into individual GQL files. While this may sound excessive, it offers a lot of power in terms of code coverage, and project organization. You can read more about using individual GraphQL files with Webpack in the Apollo docs, it’s pretty simple to set up. Once you have all your documents broken out into files, we can select the ones we want to test, and make accompanying ExUnit test modules for them.
Let’s say we have this GQL document:
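For example, a hypothetical “get_users.gql” that imports a fragment from a sibling file:

```graphql
#import "./user_fields.gql"

query GetUsers {
  users {
    ...UserFields
  }
}
```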
And our imported fragment is just a simple set of reusable attributes, it looks like this:
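A hypothetical “user_fields.gql” with those reusable attributes:

```graphql
fragment UserFields on User {
  id
  name
  email
}
```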
We can then create a basic ExUnit test module we’ll call “get_users_test.exs”, and we’ll load our schema and the GQL document into the module using the “load_gql/2” macro.
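A sketch of that module (the schema module name and file path are hypothetical):

```elixir
defmodule MyApp.GetUsersTest do
  use ExUnit.Case, async: true
  use Wormwood.GQLCase

  # Attach the schema and the parsed, validated document to this module.
  load_gql MyApp.Schema, "test/gql/get_users.gql"
end
```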
Let’s break down what Wormwood is doing here:
When “load_gql/2” is executed, it attaches two special attributes to the module it was called from: one is assigned the Absinthe schema from the first argument, and the other the full source text of the query file from the second argument. Wormwood will expand all import statements it can find, and will raise an exception if it cannot find a file or cannot validate the syntax of the full query with Absinthe.
Now that our module has a schema and document assigned, we can query it using the simple “query_gql/1” function:
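For example:

```elixir
test "query for users" do
  # Runs the loaded document against the loaded schema via Absinthe.
  result = query_gql(variables: %{}, context: %{})
  assert {:ok, query_data} = result
end
```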
Our query results now live in the “query_data” variable. If we inspect it, we can see that our results are returned in vanilla Elixir lists and maps:
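With some hypothetical seed data, inspecting it might look something like this:

```elixir
IO.inspect(query_data)
# %{
#   data: %{
#     "users" => [
#       %{"id" => "1", "name" => "Ada", "email" => "ada@example.com"},
#       %{"id" => "2", "name" => "Alan", "email" => "alan@example.com"}
#     ]
#   }
# }
```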
If you have a deeply nested structure, a good tip is to use the Elixir Kernel function “get_in/2”, which takes the structure you want to access data from and a list of keys or access functions to retrieve specific members. For example, if I wanted to fetch the id of the first user in this big query result, I could simply do the following:
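Assuming the usual Absinthe result shape (a “:data” atom key at the top, string keys below):

```elixir
# Walk the nested result: data -> "users" -> first element -> "id".
first_user_id = get_in(query_data, [:data, "users", Access.at(0), "id"])
```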
More Advanced Queries
The above example is just a simple demo of how to use Wormwood in your testing suite. Wormwood supports a few more options and configurations when testing. Below is a quick list of the features, along with snippets to show how it’s done. You can also dig around in the examples folder on the GitHub repo.
Running a Query With Variables and Context
You can pass the same options keyword list you would pass to “Absinthe.run/3” into the “query_gql/1” function. Refer to the Absinthe docs for the exact options and their usage. If we want to pass a variable into this query, we can just leverage the options Absinthe provides:
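For instance, assuming our document declares a “$userId” variable:

```elixir
test "fetches one user by id" do
  # Variables use string keys, just like with Absinthe.run/3.
  assert {:ok, _query_data} = query_gql(variables: %{"userId" => "1"})
end
```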
The same can be done for context if you want to fake something like authentication:
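For example, with a hypothetical “current_user” entry that our resolvers read from the context:

```elixir
test "runs as an authenticated user" do
  # Stub out authentication by injecting context directly.
  assert {:ok, _query_data} =
           query_gql(context: %{current_user: %{id: 1, role: :admin}})
end
```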
Running a raw string, rather than a GQL file
Of course if you don’t want to break out your GQL documents into files, you can still assign them to a test module as a raw string. Rather than calling “load_gql/2”, call “set_gql/2”:
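For example (schema name hypothetical):

```elixir
defmodule MyApp.GetUsersRawTest do
  use ExUnit.Case, async: true
  use Wormwood.GQLCase

  # Same idea as load_gql/2, but the document is an inline string.
  set_gql MyApp.Schema, """
  query GetUsers {
    users {
      id
      name
    }
  }
  """
end
```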
Wormwood will still expand import statements when using raw strings! They will be relative to the current working directory, which is usually your app root directory.
Running a Query With a Custom Absinthe Pipeline
If at any point you wish to modify the pipeline that Absinthe uses for executing a document loaded into a module, you can do so by composing a list of pipeline Phases, and passing them into the “query_gql_with_pipeline/2” function like so:
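For example, starting from Absinthe’s default document pipeline and removing a validation phase (the phase chosen here, and the argument order shown, are illustrative):

```elixir
test "runs with a custom pipeline" do
  # Compose a pipeline from Absinthe's default document phases,
  # dropping one validation phase as a demonstration.
  pipeline =
    MyApp.Schema
    |> Absinthe.Pipeline.for_document()
    |> Absinthe.Pipeline.without(Absinthe.Phase.Document.Validation.ScalarLeafs)

  assert {:ok, _query_data} = query_gql_with_pipeline([], pipeline)
end
```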
Wormwood was born out of specific quirks we ran into while testing our API Scanner, we hope you find it useful as well! It aims to help accelerate and improve the way GraphQL tests are written within ExUnit. If you have stars, issues or contributions, feel free to leave any of them on the official GitHub repo!
Announcing GraphQL Security Scanning
October 15, 2019
For the second time this year: API security scanning changes today. We’ve been working hard on adding support to scan GraphQL APIs for security vulnerabilities, best practices, and correctness. Earlier this year, the Tinfoil Security API Scanner initially launched with support for the Swagger documentation format, and we’re excited to expand coverage to now include GraphQL APIs. To be clear, we’re not deprecating support for OpenAPI scanning - in fact, OpenAPI specification v3 support is coming soon! We’ve enjoyed building our own GraphQL APIs to power our user interfaces and we felt the need to ensure their correctness as we built them. To that end we’ve added first-class GraphQL support to our API Scanner. We’re thrilled today to announce the beta of our GraphQL scanning capabilities at the GraphQL Summit conference in San Francisco.
We use GraphQL internally to iterate quickly on our user interfaces without huge changes to the backend server each time. (By the way, we’re hiring if Elixir, GraphQL, and Vue are interesting to you.) GraphQL makes it easy to decouple user interface needs from a backend API server by offering a buffet of data and relationships without restricting the format to a specific JSON payload. UI developers can now iterate quickly, but this puts extra load on API server engineers to build a performant, and most importantly safe, GraphQL API.
One huge advantage of GraphQL APIs is that they are self-documenting. Most GraphQL APIs can be introspected to pull out the types, fields, and mutations. This can make it a joy to work with a tool like GraphiQL to explore an API, but also makes it very easy to get started scanning. All you need to do is provide the GraphQL endpoint and the Tinfoil Security API scanner will do the rest. We automatically discover the different types, fields, arguments, and mutations exposed by your API, and generate an optimized set of documents to exercise all of the different aspects of your API.
In addition to searching for various injections (both direct and blind), we also look for GraphQL-specific concerns. One such concern is cycles in the query graph, which can be abused to DoS an API server with a request that is time-consuming to fulfill. GraphQL allows you to set complexity limits on documents received from the client to help prevent this, and our scanner makes sure your API server has a reasonable complexity limit set. When not auditing high-complexity queries, we make sure the documents we generate balance simplicity with API coverage.
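For instance, in a hypothetical schema where User and Post reference each other, a client can nest the cycle arbitrarily deep, multiplying the work the server must do:

```graphql
query DeepCycle {
  user(id: "1") {
    posts {
      author {
        posts {
          author {
            name
          }
        }
      }
    }
  }
}
```

A reasonable complexity limit rejects documents like this before resolvers ever run.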
Our support for GraphQL is only beginning; please stay tuned for more developments! If you’re interested in joining the beta of our GraphQL Scanning, please drop us a line.
TechNet Cyber 2019: Here we come!
May 14, 2019
Ready for another round of “Do Good With Your Data”? Join us at TechNet Cyber in Baltimore this weekend! In the vein of our community partner for the year, HSSV, we’ve deviated a bit this time to help the veterans that folks at TechNet support.
We believe your data should be used for something good, and want to encourage you to think about what your data is worth.
To help us with our mission, Tinfoil Security has partnered with Freedom Service Dogs of America (FSDA) for TechNet 2019. FSDA provides assistance dogs for veterans returning from Iraq and Afghanistan. Each dog supports a veteran’s re-entry into civilian life, including providing companionship for disabled veterans. For every badge we scan, we will make a donation to FSDA. The more scans we get, the more money goes toward this incredible foundation, and the more of an impact we can make for those who gave so much to protect us. Help us help veterans.
Additionally, we’re excited to announce that our founders have both won the AFCEA 40 under 40 award this year! We are humbled and honored to be selected. We have a booth at TechNet Cyber in Baltimore this year, and would love to chat. Feel free to stop by booth 2143 for a conversation and a tinfoil hat.
TechNet Cyber - a way to connect government and military leaders with industry leaders - gives us the opportunity to further educate business and government leaders about the necessity of a security-first mindset. We believe that the internet and its users need more security, and we strive to make that goal a reality, one company at a time. Even if just a few companies or government agencies see the advantages of securing web applications and APIs in their DevOps pipeline, then our mission at TechNet Cyber was a success.
Most post-breach plans by IT leaders share a common flaw: they are reactive rather than proactive. Security should be the first thing you develop a mindset for. A good guide to start with is the SANS Top 5, especially if a security team is a new addition to your organization.
A large number of breaches are the direct result of something easily preventable. That is the most frustrating aspect of most breaches: something easily preventable causing damage to consumers and companies alike. The rate of human error is too high for the demanding nature of today’s SDLC; that’s where automated tools come in. A solid security team, augmented with a great web application and API scanner, helps ensure that you are doing everything you can to be secure.
We are excited to attend TechNet Cyber 2019. We’ll be at booth 2143, so make sure you stop by and have a conversation about how our web application and API scanners can help your company be secure and efficient. If nothing else, stop by the booth to see if you can do more push-ups than Derek. ;)