Work away: breaking out of office walls

At Tinfoil, we often strive to follow in the footsteps of companies we look up to. One event we’ve adopted from some of our favorites (Stripe and Baydin) is our annual work away. A work away is a trip away from the San Francisco Bay Area that tends to be 70% work and 30% play. We work on unique projects we’ve wanted to experiment with, but that may not have an immediately obvious benefit for our customers. It’s a great time for R&D and learning.


Apart from work, each work away has fun events that bring us closer as a team. Here is a quick overview of the fun we’ve fit into our time away: beach time, island excursions, lobster bakes, brewery tours, bunker exploration, Seafood Festival visits, parasailing, “spontaneous” pool breaks (either in the cabin’s pool or at the pool table... I have learned I was ambiguous when scheduling!), hiking, escape the room games, cooking dinner in groups, playing board games, and treating the team to fancy dinners.


Tinfoil work away 2017

This week marks our third annual work away. As our team has grown since our first work away, and we now comprise much more than just engineers, work away has definitely evolved. Each evolution has brought different and valuable learnings for each team member's personal and professional growth. Though getting the full team to a work away is hard once you grow to a larger team size (16+), we always come out stronger because of it.


Our first work away was amazing. We headed to the beach in New Hampshire, and luckily there were enough nooks and crannies to shove in extra beds and cots for at least 12 people. It was incredibly affordable, and an eye-opening experience for me as a CEO. Some employees grew dramatically into the team during the time away. One showed that, though he was from a small suburban town, he could speak enough fluent Mandarin to get us through an escape the room game in half the time. It was the first time some of our employees had ever seen an ocean. We all stayed up late playing board games and building stronger bonds, supporting the familial culture we embrace.


Our first work away project was really just for engineers, as that’s what we all were. We built a deployment manager (which we lovingly call Arceus internally), and it has saved us hundreds of hours. All of our engineers are polyglots, picking the right language and tools to get the job done, which means we have a lot of varied projects, each with its own build and deployment tooling. Arceus gave us a single place to track the status of deployments and release new ones, allowing us to release things concurrently. Focusing on our tenet of automation, it brought our release engineering from a hefty manual process down to a few hours a week of a single engineer’s time. As with any new trial, we did have efforts that failed, but all of our failures were useful learnings.


Our second work away was the first time we had non-technical people on the team, and it wasn’t nearly as successful as our first. We repeated the venue (though we changed up the fun things, other than the escape the room game... that’s a permanent event now!), which still worked out great.


Our engineers decided our second work away would be spent learning new things rather than focusing on a single project. Our goal was to bake off different frontend technologies to learn their pros and cons. It ended up being marginally useful, but not as useful as the first. We’ve learned that, for our team, work away projects need to be planned, at least at a high level, so we know what we’re looking to get out of them. We did learn the lay of the land with specific technologies we were interested in, but we missed having a concrete finished product.


Our marketing lead joined us one week before work away, and she had a much more successful work away than the engineering team. She was immediately immersed in the team (which usually takes a few months of walks and lunches) and came up with a concrete set of goals for the upcoming year. She started to explore new designs for our collateral and brand, took some team headshots, purchased important equipment, and outlined PR goals for Tinfoil’s founders. She got immediate face-to-face time any time she had questions, and her learning curve was far less steep than it otherwise would have been. We’re a team that cares about teaching, and anybody wanting a break was happy to teach her something new!


This brings us to this week’s work away. We arrived 2 nights ago, and are just getting into the swing of things. I hope this work away is the best yet. We’re a larger team. Our engineers are working on a fun project to learn more frontend development as a group. They’re building a platform to enforce our curiosity value and collect team members’ learnings throughout the day. Our marketing team is creating a new yearly plan from this year’s learnings. They’re adding new OKRs, scoping out new technology we’ll need (or need to build) for new marketing efforts, and are solidifying a highly technical white paper for our customers. Our HR and administrative folks are finishing up a new version of our handbook and then helping to support the marketing team. Sales and support are collaborating on automated tools to increase communication between the two teams. And our new government sales team is taking a week to think outside of the box and build tooling to assist in automating government sales and business intelligence, embodying Tinfoil’s hacking value.


Team work away 2018

We’re sitting in Tahoe to shorten this year’s travel time. We’ve taken a page out of our retreat book and created cooking teams for dinner, but so far we’ve had a wonderful, relaxed dynamic of team members pairing and collaborating across teams. It’s exactly how I want my company to work - this is how we make the most progress, get new ideas, and don’t forget about edge cases.


Work aways don’t have to be expensive, and they don’t have to happen every quarter, but they do seem to have a real effect on our team. I encourage other startups and organizations to consider work aways as an alternative way to bring their teams together, and maybe to get a new project off the ground that could have a large impact on their productivity or customers. Hopefully this helps you see a different perspective.


Feel free to follow our Facebook or Twitter as we post updates during this year’s work away. As with anything I post, I welcome any questions and comments - I’m always trying to take feedback to keep growing.


Ainsley Braun

Ainsley Braun is the co-founder and CEO of Tinfoil Security. She's consistently looking for interesting, innovative ways to improve the way security is currently implemented. She spends a lot of her time thinking about the usability and pain points of security, and loves talking with Tinfoil's users. She also loves rowing, flying kites, and paragliding.


Protect Yourself from Magecart Using Subresource Integrity

Magecart has become a big issue in web application security over the past few days. The group has skimmed credit card information from British Airways and, more recently, has been injecting malicious code into JavaScript assets served by Feedify. Modern websites use many resources to provide the rich experiences customers have come to expect. However, if you don’t directly host or control those resources, you are vulnerable to a provider being attacked and malicious code being injected into the assets you consume.

We’ve previously written about Subresource Integrity, but I’d like to reiterate the benefits and show how to get started securing your assets. Subresource Integrity is an official browser feature that allows websites to ensure the integrity of resources loaded from external sources, such as Content Delivery Networks (CDNs). Loading from a CDN is a common technique used by websites to speed up the delivery of assets, including common JavaScript libraries like jQuery, Google Analytics, or Segment.io’s analytics.js.

Since these JavaScript libraries are uncontrolled external code running in the context of your web application, their content must be audited and trusted. Subresource Integrity mitigates this issue by ensuring that every loaded resource contains exactly the content the website expects. This is done through a cryptographic digest, or hash, computed on each fetched resource and compared against a digest served with your page. This gives the browser the ability to detect resources that have been tampered with and to abort loading them before any malicious code is executed.
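To make the mechanism concrete, here is a small Python sketch of the check a browser performs; the asset bytes and the `verify_sri` helper name are made up for illustration, and real browsers also accept sha384 and sha512 digests:

```python
import base64
import hashlib

def verify_sri(content: bytes, declared: str) -> bool:
    """Return True if the fetched content matches the declared integrity value."""
    algo, _, expected = declared.partition("-")
    if algo != "sha256":  # this sketch only handles sha256
        raise ValueError("unsupported digest algorithm")
    actual = base64.b64encode(hashlib.sha256(content).digest()).decode()
    return actual == expected

# Compute the integrity value a page would serve for this asset...
asset = b'console.log("hello");'
integrity = "sha256-" + base64.b64encode(hashlib.sha256(asset).digest()).decode()

# ...then the browser-side check passes for the original bytes and
# fails once anything (say, a card skimmer) is appended.
verify_sri(asset, integrity)               # True
verify_sri(asset + b"skimmer", integrity)  # False
```

The same base64-encoded SHA-256 digest, prefixed with "sha256-", is what goes into the integrity attribute shown below.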

Protecting a resource is as easy as adding the "integrity" attribute to an asset’s HTML tag:

<script src="https://example.com/v1.0.0/include.js"
        integrity="sha256-Rj/9XDU7F6pNSX8yBddiCIIS+XKDTtdq0//No0MH0AE="
        crossorigin="anonymous">
</script>

Since we previously wrote about Subresource Integrity, support has grown among browser vendors, and all modern desktop browsers now support it. We highly recommend this solution, but it comes with a caveat: if the external entity changes the JavaScript for a bug fix and doesn’t notify you, your integrity hash won’t match. This is by design, but you may want a mechanism to link to a specific version of a library, and you may also need to evaluate your risks on a per-page basis. Many popular web frameworks provide libraries that make it easy to enable Subresource Integrity on your assets, and further instructions on making use of the technology are available on the Mozilla Developer Network. The SRI Hash tool provides an easy way to calculate integrity hashes for your assets.

I hope you are inspired to integrate Subresource Integrity into your website. All Tinfoil Security scans flag external resources that are not protected by subresource integrity. Give it a try by signing up for our 30-day free trial.


Ben Sedat

Ben Sedat is the Engineering Wizard of Tinfoil Security. He's a bit of a blend between a traditional software engineer (builder) and security engineer (breaker). He spends a lot of time thinking about security: both detection as well as creating solutions for the security issues that exist in software and the internet. He also plays lots of video games. Lots.


A Quick Guide to the Complex: Ecto.Multi

Ecto.Multi, a data structure added in Ecto 2.0, is an extremely useful tool for creating and executing complex, atomic transactions. This very brief guide will cover a few of the most useful methods associated with Ecto.Multi and when to use them.

Common Uses

insert(multi, name, changeset_or_struct, opts \\ [])
The most straightforward way to use Ecto.Multi is to chain individual changesets together. insert, update, and delete functions are available and behave as you might expect, with all operations executed in the order in which they are added. You can imagine a transaction dealing with a user signing up via an invitation email might look something like this:

Ecto.Multi.new()
|> Ecto.Multi.insert(:user, user_changeset)
|> Ecto.Multi.delete(:invitation, invitation)
|> Repo.transaction()

What might have been two separate database transactions is condensed into a single, atomic transaction, with Ecto.Multi handling rollbacks when necessary. But what about when one operation relies on the results of a previous one?


Run

run(multi, name, fun)

Run is an extremely versatile function that adds an arbitrary function to the Ecto.Multi transaction. The function must return a tuple in the form of {:ok, value} or {:error, value} and, importantly, is passed a map of the operations processed so far in the Multi struct. This means we can key into the changes created by previous operations and use those values while executing any code we like. The transaction creating and sending the invitation mentioned above could look something like this:

Ecto.Multi.new()
|> Ecto.Multi.insert(:invitation, invitation_changeset)
|> Ecto.Multi.run(:send_invite_email, fn multi_map ->
  send_invite_email(multi_map)
end)
|> Repo.transaction()
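For completeness, Repo.transaction/1 on a Multi returns either {:ok, changes} on success, where changes maps each operation’s name to its result, or a four-element error tuple, so callers can pattern match on the outcome. A sketch using the names from the example above:

```elixir
case Repo.transaction(multi) do
  {:ok, %{invitation: invitation, send_invite_email: _email_result}} ->
    # Every operation succeeded; the map holds each named result.
    {:ok, invitation}

  {:error, failed_operation, failed_value, _changes_so_far} ->
    # One operation failed, and the entire transaction was rolled back.
    {:error, failed_operation, failed_value}
end
```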

Append

append(lhs, rhs)
append is a handy way to combine two Ecto.Multi structs into a single atomic transaction. One potential pattern is to compose multiple functions that return Ecto.Multi structs, and combine them as needed. As noted above, operations are executed in order, so if you want the appended struct to be executed first, you’ll want to use prepend instead.

def create_and_send_invite(invitation_changeset) do
  Ecto.Multi.new()
  |> Ecto.Multi.insert(:invitation, invitation_changeset)
  |> Ecto.Multi.run(:send_invite_email, fn multi_map ->
    send_invite_email(multi_map)
  end)
end


def clear_expired_invites() do
  Ecto.Multi.new()
  |> Ecto.Multi.run(:clear_invites, fn _multi_map ->
    clear_invites()
  end)
end


def invite_user(invitation_changeset) do
  invite_multi = create_and_send_invite(invitation_changeset)
  clear_expired = clear_expired_invites()

  Ecto.Multi.new()
  |> Ecto.Multi.append(invite_multi)
  |> Ecto.Multi.append(clear_expired)
  |> Repo.transaction()
end

Merge

merge(multi, fun)

Similar to run, merge will execute a given function and any arbitrary code associated with it. Unlike run, the function is expected to return an Ecto.Multi struct, whose operations will be merged into the multi struct passed as the first parameter.

def add_user_to_organization(multi_map, organization) do
  user = multi_map[:user]

  Ecto.Multi.new()
  |> Ecto.Multi.run(:organization, fn _changes ->
    Organization.add_user(organization, user)
  end)
end


def create_collaborator(user_changeset, organization) do
  Ecto.Multi.new()
  |> Ecto.Multi.insert(:user, user_changeset)
  |> Ecto.Multi.merge(fn multi_map ->
    add_user_to_organization(multi_map, organization)
  end)
  |> Repo.transaction()
end


Peter Ludlum

Peter is a Software Engineering Intern at Tinfoil Security. A recent graduate of App Academy, he enjoys nothing more than bringing beautiful (and functional) web pages to life. When he isn't coding, Peter is usually lost in a book or strumming out a new tune on the ukulele.


Useful Flags for Chromedriver

Chromedriver is a powerful and flexible tool for remotely controlling a browser while testing your site for user interactions, or as part of a crawler or other automated application. It has many configuration options in the form of flags passed in at launch, which give you a great deal of control over Chrome or Chromium’s behavior. Unfortunately, there are a lot of flags, and navigating them can be a bit of a slog, so I wanted to outline a few that I’ve used in my own Chromedriver adventures and found generally useful.

--window-size=[WIDTH],[HEIGHT]

--start-maximized

As you might expect, these modify the size of the browser window when Chromedriver starts up. Trying to see what the browser is interacting with while it’s running? Toss in a --start-maximized for visibility. Trying to watch debug traffic or spec results in a terminal while Chromedriver is running, or perhaps multiple browser windows running together? Set --window-size to a small value so you can keep things in sight (and, if rendering is wholly unnecessary, there’s always --headless).
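As an illustration, here's a Python sketch of how flags like these are commonly wired into a Selenium-driven Chromedriver session. The flag strings are real Chrome switches; the Selenium calls are shown in comments because they assume Selenium and a local Chrome install, and the dedupe helper is just for this sketch:

```python
# Assemble the Chrome switches we want for an automated run.
flags = [
    "--window-size=1024,768",  # keep the window small so terminals stay visible
    "--headless",              # skip rendering entirely when it's not needed
    "--incognito",             # isolate cookies/local storage between runs
]

def dedupe(args):
    """Drop repeated flags while preserving their first-seen order."""
    seen = set()
    return [a for a in args if not (a in seen or seen.add(a))]

args = dedupe(flags + ["--headless"])  # later duplicates are ignored

# With Selenium installed, the flags would be applied like so:
# from selenium import webdriver
# options = webdriver.ChromeOptions()
# for arg in args:
#     options.add_argument(arg)
# driver = webdriver.Chrome(options=options)
```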

--proxy-server=[PROXY_ADDRESS]

--ignore-certificate-errors

If you want to use a proxy with your browser, it can be passed in as a flag. This does, however, result in a lot of SSL errors if you’re setting up a man-in-the-middle proxy (to analyze traffic, for instance). In normal browser operation, this throws up a blocking page to make sure the user knows their secured traffic is passing through a third party, but that can be an inconvenience if your automated browser needs to access HTTPS resources. Using --ignore-certificate-errors gets around this issue handily, allowing the browser to continue without that roadblock (though the page will still be marked insecure in the address bar).

--incognito

Starts the browser in incognito mode. This prevents cookies and local storage from carrying over between browser windows, which is useful if you have tests running in parallel, for instance.

--auto-open-devtools-for-tabs

Opens the devtools pane at startup. This is useful for checking for errors in the JavaScript console, and is extra, extra useful if you’re chasing down problems with network requests in testing, as it starts recording immediately, rather than your needing to grab the window, open devtools, and then reload the page (possibly interrupting your own tests or other app activity).

--gaia-url

This is a somewhat unusual one. Google’s account-management functions in Chrome and Chromium are built as GAIA apps and will try to ping Google’s servers to sync account information. This can lead to some spurious open network connections if your automated application is monitoring those to check page loads. Routing this connection to a random external URL will cause Chrome to fail to boot, but if you really want to suppress the connection, you can set this value to a chrome:// URL with a fake address. The browser will then talk to itself when trying to sync, which stops the external connection (but also breaks that piece of browser functionality).


These are just a small, small subset of the available flags and options. A fairly complete list is available at this site for all your automation needs. Hopefully some of these will prove useful on your next browser automation project. Safe building!


Alex Bullen

Alex is Tinfoil Security's Top-Shelf Programmer (and fetcher of things from high shelves). A former psychology wonk and recent App Academy grad, Alex endeavors to treat every challenge as an opportunity to improve his code-fu. When not busily building blocks of precisely put code, you can find him reading fantasy novels or practicing kung fu.


Disclosing Vulnerabilities: How to Avoid Becoming The Next Data Breach Headline!

When it comes to disclosing vulnerabilities to enterprise companies, they seem to prefer the “hear, speak, see no evil” strategy, and the conversation often ends up looking something like this:

Me: Hey, we found a vulnerability on your site, wanted to let you know so you can fix it.
You: Cool thanks for letting us know, but we’re going to try to sue you for telling us.
Me: ???

You can see how this would discourage those who find vulnerabilities from disclosing them to enterprise companies. However, if we look at some of the major data breaches of the last few years, it’s clear there is a need to encourage transparency in order to address vulnerabilities, rather than ignoring them or even threatening those who bring them to light.

For example, you may have heard about Panera Bread recently leaking millions of customers’ data. It’s a story that is depressingly familiar: the initial report of the vulnerability was dismissed as a scam, then it was ignored, then “fixed” with a token patch that did little to actually prevent the data from being exposed. For at least eight months, nearly 37 million Panera Bread customers had their data exposed for anyone to collect.

Why does this happen? Why do companies have such a visceral reaction when this kind of news is presented to them? We have seen similar reactions when disclosing vulnerabilities to large enterprise companies (some very well known)! After one particular disclosure, we were even threatened with a lawsuit for letting a company know they had a vulnerability, even though we helped them by suggesting how they could fix it.

Let’s be clear here: even if a company is not a current customer, we reach out immediately after finding a vulnerability to let them know, so they can get the issue fixed. All too often, however, we are met with hostility and anger: “How dare you tell us we have a problem!”

This is a very dangerous mindset and culture that seems to exist at the leadership level of some of these companies.

So how can we change this?

It has to start at the top. CISOs need to adopt a better mindset and culture around handling incoming help. Yes, some people may use disclosure as a sales tactic, but that does not negate the fact that the vulnerability is there. All disclosures should be taken seriously and, dare we say, welcomed.

Some companies are great; they even have bug bounty programs that offer rewards to individuals for finding issues. Here at Tinfoil, we see it as our obligation to be good stewards of the community and always share any potential threats, regardless of whether the target is a current customer, and with no obligation to become one. We just want you to have safe and secure applications! Why? Because chances are, you hold the personal data of some of our team members and customers.

Tinfoil Security has decided to start a campaign: every time there is an avoidable breach, we will apply the #UseTinfoil hashtag when we know a tool like ours could have kept your data from falling into the wrong hands!


Peter Ludlum

Peter is a Software Engineering Intern at Tinfoil Security. A recent graduate of App Academy, he enjoys nothing more than bringing beautiful (and functional) web pages to life. When he isn't coding, Peter is usually lost in a book or strumming out a new tune on the ukulele.