Pluses and Pitfalls of Repo.stream
October 24, 2018
Scenario: you are working on a Phoenix app that has seen a good deal of use, and you need to transform some tables encompassing an exceptionally large number of rows and their relations. Obviously, some consideration for performance is necessary; if you could avoid loading an entire table into memory to pull this off, that would be ideal, right? Enter Ecto.Repo.stream: turn that giant list into a lazily evaluated enumerable and load rows as needed. Job done, right? Well, it depends.
The good news is that you will definitely address the issue of memory use. However, it comes at the cost of time, which can increase greatly if you need to access a number of rows in an associated table for every row you are referencing. For instance:
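The code sample didn't survive here, but a minimal sketch of the pattern being described might look like the following, assuming hypothetical `Foo` and `Bar` schemas where each `Bar` has a `foo_id` foreign key, and a hypothetical `transform_bar/1` function:

```elixir
import Ecto.Query

Repo.transaction(fn ->
  Foo
  |> Repo.stream()
  |> Enum.each(fn foo ->
    # For every Foo row, we open *another* stream over its Bars,
    # all on the single connection held open by the transaction.
    from(b in Bar, where: b.foo_id == ^foo.id)
    |> Repo.stream()
    |> Enum.each(&transform_bar/1)
  end)
end)
```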
This might seem like a good idea if there are a large number of Bars for every Foo entry, but since the stream must be enumerated inside a transaction, you have a single connection to work with until you finish enumerating the stream. This can be adjusted with the :timeout option on Repo.stream, which can be relaxed from its default of 15,000 milliseconds all the way to :infinity, but if your streaming changes rely on a flaky connection or some other piece of code, you could run into an issue again on that side. It's safer to avoid nesting streams if possible, or to find a different way of chunking your data.
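One way of chunking the data (a sketch of my own, not from the original post, again with hypothetical `Foo`, a hypothetical `:bars` association, and a hypothetical `transform/1`): stream only the parent rows, batch them, and preload the association once per batch, so each chunk costs one extra query instead of one stream per row:

```elixir
import Ecto.Query

Repo.transaction(
  fn ->
    Foo
    |> Repo.stream(max_rows: 500)
    |> Stream.chunk_every(500)
    |> Stream.each(fn foos ->
      foos
      |> Repo.preload(:bars)   # one additional query per chunk of 500
      |> Enum.each(&transform/1)
    end)
    |> Stream.run()
  end,
  timeout: :infinity           # the relaxed transaction timeout
)
```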
If memory is a more pressing constraint than time, Repo.stream is a pretty convenient way to manage how much is loaded into memory at a given time. Just remember to choose an appropriate timeout value before you start.
Dockerfiles for Phoenix
October 16, 2018
While a lot of our older software was written in Ruby and Ruby on Rails, we've been expanding the past couple of years into Elixir and Phoenix (Elixir's batteries-included web framework). Docker remains our preferred mechanism for delivering our software in a well-tested and repeatable format. I'd like to share with you a simple Dockerfile for Phoenix, specifically supporting Phoenix >= 1.4, which uses webpack instead of brunch.
We're using a multi-stage build to keep the image slim and nimble. In the first stage we fetch the Elixir dependencies, mainly for the phoenix and phoenix_html Node modules that are co-located in the Elixir Hex packages. The second stage builds and emits the finalized assets with webpack. In the final stage we're back on an Elixir base image, where we copy everything over, merge the assets, and set the command for starting the server.
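The Dockerfile itself wasn't preserved here, but a minimal sketch of the three-stage layout described above might look like this (the image tags, paths, and `npm run deploy` script name are illustrative assumptions, matching a default Phoenix 1.4 webpack setup):

```dockerfile
# Stage 1: fetch Elixir deps (provides the phoenix/phoenix_html JS packages)
FROM elixir:1.7-alpine AS deps
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod

# Stage 2: build and emit the finalized assets with webpack
FROM node:10-alpine AS assets
WORKDIR /app
COPY --from=deps /app/deps deps
COPY assets assets
# "npm run deploy" runs webpack in production mode, emitting to ../priv/static
RUN cd assets && npm install && npm run deploy

# Stage 3: back to an Elixir base image; merge everything and compile
FROM elixir:1.7-alpine
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY . .
COPY --from=deps /app/deps deps
COPY --from=assets /app/priv/static priv/static
RUN mix compile && mix phx.digest
CMD ["mix", "phx.server"]
```

Because each stage starts from its own base image, the Node toolchain never ends up in the final image, which is what keeps it around the 100MB mark mentioned below.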
This produces an image around 100MB. Compared to our Slim Dockerfiles for Rails that's a space savings of 50%! We've seen some pure-Elixir applications even smaller when you use Distillery to create an optimized release, which we'd recommend for heavier production use as it gives you a lot more control.
I hope you enjoyed seeing an example of building a slim Docker image for Phoenix. If you'd like to help build next-generation security software and work with Elixir and Phoenix daily check out our open job positions!
October 02, 2018
Our team is like a small family. We try to encourage everyone to get to know everyone else. Unfortunately, as with any business, sometimes sales, marketing, engineering, and all the pieces in between don’t collaborate as much as they could. As founders, we appreciate every employee, but sometimes the new engineer doesn’t really know what the new sales person is doing, nor how they can help.
Enter: the awesome possum. For a while, I've been thinking of ways to encourage people on our team to collaborate with others they might not normally work with, and to encourage people to show appreciation for the help they receive. That's when I came up with the idea of the awesome possum. The awesome possum is a little stuffed possum. If you have the awesome possum, you pass it along to the next person who does something awesome (out of their job scope or truly exemplifying a Tinfoil value) for you or someone else. They, in turn, pass it along. There's no time limit for holding the possum, so you could have it for a minute or a month.
When I first showed up with the awesome possum everybody laughed. Now, he’s dressed more dapperly than I am and always in a new Tinfoil hat. It might be a small thing, but it’s just one way for each of us to show appreciation across the team. There’s a lot of joy when the possum ends up at the other end of the office, circulating amongst a new team. A little out-of-the-box thinking goes a long way.
What great hacks have you implemented on your team to encourage collaboration or appreciation? I’d love to hear them!
How to Choose a Web Application Scanner: DAST, SAST, RASP, IAST, HAST...Holy SaaS!
September 25, 2018
Five penetration testers and five developers walk into a bar. Not just any bar, but a bar that serves up the finest automated web application security scanning cocktails you can find. The bartender asks the first penetration tester, “What’re you drinking?” NoName Hacker responds that he would like the SAST. The other pen-testers scoff at this. One claims that his skills for manual testing outweigh any tools at this bar. Apparently his home-made scanning cocktails made with age-old moonshine are the best. Another screams that SAST is inefficient since it requires customizing the scanner to the application’s stack. Now it’s time for a developer to step up and order. The crowd is screaming behind her, so what should she do? What would you do?
Imagine what the inventory of this bar would look like for a minute. It’s probably similar to a World of Beer with 550+ beers on tap, except instead of Hefeweizen, Kolsch, IPAs, and Stouts, we have DAST, SAST, RASP, IAST, and HAST tools. Oh, and I almost forgot, the huge list of open source tools that exist. Making a decision is a nightmare. Everyone has a different opinion, and what works for NoName Hacker might not work for the Developer Code Queen.
So where should you start?
It's best to decide first whether or not automating web vulnerability scanning is important to your organization. As with many business functions, automation saves time and money. Your solution needs to increase the efficiency of your organization's Software Development Life Cycle (SDLC). But since you are reading this, you have probably already decided that you can save time and money by automating your scanning and avoiding continuous manual application penetration tests.
Just to pound this home: imagine that your application contains merely 50 different entry points, and every week you want your security engineer to test each one for 10 different vulnerabilities with 10 variants each. That's 100 tests for each of the 50 entry points, or 5,000 tests in total. At 3 minutes per test, your security engineer would be testing for 250 hours every single week, more than six engineers working full time.
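The back-of-the-envelope math, spelled out:

```elixir
entry_points     = 50
tests_per_entry  = 10 * 10   # 10 vulnerability classes x 10 variants each
minutes_per_test = 3

total_minutes = entry_points * tests_per_entry * minutes_per_test
hours_per_week = total_minutes / 60
# 15,000 minutes, or 250.0 hours of manual testing per week
```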
It’s time to choose an automated solution.
What should you look for and where should you start? There are a few areas to focus on:
You are busy and you need a solution that implements easily. It should be simple to deploy, accurate, secure, and give usable and actionable information. A tool that is complicated and time-consuming to learn will waste more time. If you spend an extensive period training an engineer on a complicated tool that you spent a ton of money on, you will have to repeat that process when, not if, that engineer eventually moves on to another project or to a different company.
Determine what stack you are running. If you decide on a SAST, RASP/IAST, or HAST solution, there will be several steps in the implementation process that require you to customize the scanner to your stack. Are you a modern organization running several apps on different stacks? Perhaps this isn’t the best solution for you.
A quick example: Elixir has boomed in popularity since it first appeared on the scene in 2011. The scalability and speed of Elixir and the Phoenix Framework make it popular for building web applications, APIs, and the like. Or maybe you are using Go. Your static code analyzer would have to continuously keep pace with your stack, and you will have a tough time finding a tool that supports a modern language like these with the granular accuracy you need. RASP and IAST tools also require you to install a dependency on every single web server you run, adding infrastructure pain; and because they are stack-specific and sit inside the running application, they add latency too.
Some tools will openly state that they have a low CPU impact, say, less than 4%. FOUR PERCENT?! That's a huge hit if you are an organization that cares about latency, like a bank. That kind of overhead is simply unacceptable.
Imagine a world where you have a vulnerability scanner that not only automates the scanning for vulnerabilities, but creates a ticket in your bug tracking tool containing vulnerability information. Once developers verify and fix the vulnerability, they kick off a rescan using the API. The scanner identifies that the vulnerability is indeed fixed, and automatically closes out the ticket. Months later, the team builds an update and runs a scan which finds the same vulnerability has reappeared. The process starts again with automatically creating the ticket, but instead of creating a new ticket, it reopens the past vulnerability tickets containing all of the history.
Does it sound amazing? Great. Because it’s not just a dream, it is reality.
Your scanning solution MUST integrate and enable your CI/CD processes. Look for something that connects to Jira and Jenkins (or any other ticketing and build tools you use) and automates the tasks of creating tickets, provides developers the details needed to verify that the vulnerability is not a false positive, and closes out the tickets once a vulnerability is fixed.
Can the tool scan staging and live servers?
Are all of the features and data available through the API?
Does the scanner output results in XML or JSON?
Scanners that iterate over the application's source code, such as SAST and elements of RASP/IAST, do not interact with the application the way a hacker would. SAST does not run against an actively running application; it only flags code that looks like a vulnerability, which might not actually result in a run-time vulnerability. False positive alert!
You know your processes better than anyone, and you know what your applications look like. Find a solution that makes your developers’ lives easier, not one that overloads them with a high number of false positives.
Your tool should be easy to scale with your organization. Perhaps the first initiative is to scan the top 10 most critical external facing applications. But the long-term vision is to integrate scanning into the development of all 50 of your applications. The implementation and integration considerations above apply to each and every application you manage. The easier the training and deployment for the first application, the more time you will save in integrating the solution into your entire organization.
DAST tools do not require customization for scanning your stack. They interact with your website and find vulnerabilities that actually exist. At Tinfoil, we built our scanner with the goal of creating an automated solution that scans your application like a real hacker. It is simple to get up and running, and you don't need to spend weeks training on the functionality. You can scan 50 different applications running Elixir, ASP.net, PHP, Angular, React, Ruby, Node.js, Go, and any other language your heart desires. The setup will be the same, and you will save yourself many, many headaches.
Now it’s time for you to do some work.
Before getting free trials and evaluating everything on the market, prepare your team so that you increase efficiency. High speed, low drag… am I right?
- Take an inventory of your web applications. Identify the most critical to scan first, and consider the implementation issues we talked about earlier.
- Look into the tools that you use in development. Make sure that the solutions you test integrate easily into the workflow you already have in place.
- Decide what is most important to your organization. Do you want the cheapest solution, just to check the box? Do you want the most expensive solution, with all of the doo-dads and frillies that make your security engineers' hearts race? Or do you want to increase efficiency through high-quality integration and automation?
Keep in mind that not every tool is built equal. As we have already seen, the process is different for each classification of tool. But even more important, each tool within a category was built to satisfy a need using a different methodology.
Okay, time for my shameless plug: Tinfoil Security's DAST is, bar none, the cream of the crop for DAST tools. It is premier in performance, simple to learn and deploy, and integrates seamlessly.
We also offer a patent-pending API scanner. There is no other API scanner on the market that truly interacts with your API like a hacker would, finding vulnerabilities and scanning for best practices. To learn more about this, see how to scan APIs the right way.
Are you interested in seeing a demo of our web application scanner or API scanner? You can set up a demo here. Use that link also if you need tips on good hair products, or to hear the rest of the story with NoName Hacker and Developer Code Queen.
Work away: breaking out of office walls
September 20, 2018
At Tinfoil, we often strive to follow in the footsteps of companies we look up to. One event we’ve taken from some of our favorites (Stripe and Baydin) is our annual work away. Work away is a trip away from the (SF)Bay Area and tends to be 70% work and 30% play. We work on unique projects that we’ve wanted to experiment with, but that may not have an immediately obvious benefit for our customers. It’s a great time for R&D and learning.
Apart from work, each work away has fun events, bringing us closer as a team. Here is a quick overview of how we included some fun during the time away: beach time, island excursions, lobster bakes, brewery tours, bunker exploration, Seafood Festival visits, parasailing, "spontaneous" pool breaks (either in the cabin's pool, or with the pool table... I have learned I was ambiguous when scheduling!), hiking, escape-the-room games, cooking dinner in groups, playing board games, and treating the team out to fancy dinners.
This week marks our third annual work away. As our team has grown since our first work away, and we are now made up of much more than just engineers, work away has definitely evolved. Each evolution has brought different and valuable learnings for each team member's personal and professional growth. Though having the full team at work away is hard once you start growing to a larger team size (16+), we always come out stronger because of it.
Our first work away was amazing. We headed to the beach in New Hampshire. Luckily there were enough nooks and crannies to shove extra beds and cots for at least 12 people. It was incredibly affordable, and an eye opening experience for me as a CEO. We had some employees drastically grow into the team during this time away. One employee showed that, though he was from a small suburban town, he could speak enough fluent Mandarin to get us through an escape the room game in half the time. It was the first time some of our employees had ever seen an ocean. We all stayed up late playing board games and building stronger bonds, supporting the familial culture we embrace.
Our first work away project was really just for engineers, as that’s what we all really were. We built a deployment manager (which we lovingly call Arceus internally), and it has saved us hundreds of hours of time. All of our engineers are polyglots, picking the right language and tools to get the job done, meaning we have a lot of varied projects, each with its own build tooling and deployment tooling. This gave us a singular place to track the status of and release new deployments, allowing us to release things concurrently. Focusing on our tenet of automation, it brought our release engineering efforts from a hefty manual process to only requiring a few hours a week of a single engineer’s time. As with any new trial, we did have efforts that failed, but all of our failures were useful learnings.
Our second work away was the first time we had non-technical people joining the team, and it wasn't nearly as successful as our first. We repeated the venue (though we changed up the fun things, other than the escape-the-room game... that's a permanent event now!), which still worked out great.
Our engineers decided our second work away would be spent focusing on learning new things, not specifically focusing on a single project. Our goal was to bake off different frontend technologies to learn their pros and cons. It ended up being marginally useful, but not as useful as the first. We’ve learned that, for our team, work away projects need to be planned, at least at a high level, so we know what we’re looking to get out of it. We did learn the lay of the land with specific technologies we had interest in, but missed the concrete finished product.
Our marketing lead joined us one week before work away. She had a much more successful work away than the engineering team. She was immediately immersed in the team (which usually takes a few months of walks and lunches) and came up with a concrete set of goals for the upcoming year. She started to explore new designs for our collateral and brand, took some team headshots, purchased important equipment, and outlined goals for PR for Tinfoil’s founders. She got immediate face to face time anytime she had questions and her learning curve was drastically less steep than it otherwise would have been. We’re a team that cares about teaching, and anybody wanting a break was happy to teach her something new!
This brings us to this week’s work away. We arrived 2 nights ago, and are just getting into the swing of things. I hope this work away is the best yet. We’re a larger team. Our engineers are working on a fun project to learn more frontend development as a group. They’re building a platform to enforce our curiosity value and collect team members’ learnings throughout the day. Our marketing team is creating a new yearly plan from this year’s learnings. They’re adding new OKRs, scoping out new technology we’ll need (or need to build) for new marketing efforts, and are solidifying a highly technical white paper for our customers. Our HR and administrative folks are finishing up a new version of our handbook and then helping to support the marketing team. Sales and support are collaborating on automated tools to increase communication between the two teams. And our new government sales team is taking a week to think outside of the box and build tooling to assist in automating government sales and business intelligence, embodying Tinfoil’s hacking value.
We're sitting in Tahoe to shorten this year's travel time. We've taken a page out of our retreat book and created cooking teams for dinner, but so far we've got a wonderful, relaxed dynamic of team members pairing and collaborating amongst different teams. It's exactly how I want my company to work - this is how we make the most progress, get new ideas, and don't forget about edge cases.
Work aways don’t have to be expensive. They don’t have to happen every quarter. They do seem to have an effect on our team. I love to encourage other startups and organizations to consider work aways for their teams as an alternative way to bring their team together and maybe get a new project off the ground that could have a large impact on their productivity or customers. Hopefully this helps you see a different perspective.
Feel free to follow our Facebook or Twitter as we post updates during this year’s work away. As with anything I post, I welcome any questions and comments - I’m always trying to take feedback to keep growing.