
Can bug bounty programs solve the cybersecurity workforce shortage?

Bugcrowd CEO Casey Ellis talks with FedScoop about the burgeoning bounty business: "Easier access to more talent to solve problems."

As cybersecurity moves to top of mind for organizations across the country, more are turning to bug bounty programs as a cheap and effective way to find the vulnerabilities that lurk in their systems. From car manufacturers and financial services firms to the Department of Defense, paying out bounties for bug fixes is an idea that's catching fire.

FedScoop recently sat down with Bugcrowd CEO Casey Ellis to get insight into how his company is handling the growing trend, where he sees the market moving and how he is making sure the researchers attracted by his company’s programs don’t break the law. 

___________________________________________________________________________

FedScoop: These bug bounty programs are growing in popularity. Why do you think that has come about? 
 
Casey Ellis: What we’ve seen through our company and through our platform, as well as on other platforms and just people doing it themselves, is an acceleration in launches over the past three to four years. The other thing is the spread from technology companies and traditionally early-adopting organizations to the broader market.


We recently launched programs with Fiat Chrysler and MasterCard. You are starting to see these organizations that you wouldn’t necessarily say are adopting the Facebook or Google model. The core issue that people are realizing is that they are more vulnerable than they are being told. This is a really effective way to get better results. More issues, more creative impact, more eyes on targets and, ultimately, reduced risk.
 
The other thing is that there are just not enough people around. Especially out here on the East Coast, we’re having a lot of conversations where the backdrop is just the resource shortage. The way a distributed resourcing model, like a bug bounty program, works is it gives people easier access to more talent to solve problems that they are actually unable to hire for.

FS: How have you seen the landscape change since the Pentagon ran its bug bounty program? 

CE: Hack the Pentagon as a pilot was fantastic because it proved two things: It proved that the Pentagon had vulnerabilities, and it proved that the crowd was able to find them. As a pilot, I was really happy that all went down. We’ve had conversations with all sorts of government departments, DOD included. What we’re also seeing is governments in other countries reaching out and saying, “We didn’t think governments were going to do it for a little while yet, but it seems that they’re making a run at it, so maybe it’s time for us to start talking about it, too.”

FS: One of the alleged participants in the Hack the Pentagon program was arrested and charged with stealing data from the home email accounts of top U.S. security officials. Sometimes, in these bug bounty programs, people want to stay anonymous. Yet, companies are naturally going to worry about opening their networks to the masses. How do you balance the trust factor with each side? 

CE: What we do is we use the public programs to attract and onboard people to the platform. Once they’re in there, there are four things that we assess: What kind of skills do they have, what kind of impact are they capable of, what kind of activity are we going to see from them, and the fourth … is trust.


The way that we assess trust over time really comes down to their behavior. We’re looking for their behavior on or off platform: How they interact with the programs that we run, how they interact with our team, and so on. There are a bunch of data points that we collect and, basically, feed into a scoring system. 

The goal is to figure out if [the community] has people we can trust for the clients who have trust requirements. As you’d imagine with 37,000 hackers, some of them understand that you have to wear a suit and tie to work, others don’t. The ones that don’t, they’re more than welcome to participate on the public programs and even some of the private ones where there’s a low level of trust. Once we start getting into more critical situations, we need to be able to tell the client that we’ve done our homework on who these people are and have a degree of certainty that they’re going to behave themselves.

Beyond that, we’ve got identity verification, background checking, and we’re starting to get into a position where we’re checking for clearances. Essentially, we’re mapping the community to make sure that we can deploy them appropriately to the types of things that are being asked of us from the customer.

FS: So the arrested hacker listed on his LinkedIn page that he was an employee of Bugcrowd and HackerOne. I want to talk about that designation a little bit. Do you welcome the public to consider themselves a researcher on your company’s behalf? 

CE: If you look on LinkedIn, we’ve got 600 employees. That’s clearly not true at this point. The researchers are doing the work. They’re either aspiring to be associated with what we’re doing, or they’ve built some sort of credibility on their profile page and they’re wanting LinkedIn to know about it.


In terms of encouraging it, frankly, we do. In [the arrested hacker’s] case, obviously, it’s been awkward, especially for HackerOne and for the Hack the Pentagon project. They extended a level of trust to the people they were bringing in. We checked out this guy and he’s only participated in our public programs — we haven’t invited him to anything private. He’s now very much blacklisted on the platform.

That’s one of those things where if [malicious hackers] are going to associate with us, there’s nothing we can really do to stop that. I think in the absence of having full control over it, it’s like, [list us as an employer], because if you’re proud of us then that’s a transitive pat on our back. That creates potential for situations like this, but we’re a crowdsourcing company. It gets a little messy sometimes.

FS: Can you give us a little more detail on how the Fiat Chrysler program came about? 

CE: For them, it’s very much about access to people that have automotive hacking skills, because over the past four years, the connected vehicle has become a two-ton mobile phone. Basically, you’ve got an industry that’s used to evolving at the pace of automotive manufacturing. They’re pretty progressive, but you’re building a car — you can’t change things that quickly. Now, all of a sudden, it has to become more convenient at the speed of the internet, because that’s what their customers expect.

You’ve got this product team that’s used to moving at car speed, and then another one that now has to move at internet speed. Then, you’ve got to figure out how to draw a ring around the whole thing and make it safe, which is the part where this is like, “Whoa, that’s a little disconnected from our core safety things that we do with vehicles.”


I think Fiat Chrysler, once they were done taking a deep exhale moment after all the Jeep hack stuff went down, got pretty busy thinking about how they can engage that model proactively. That’s really when we got involved. I think the program itself is phenomenal. They’re great to work with. This whole idea of automotive cybersecurity, and seeing this move from websites and software to stuff now that’s critical to safety, that’s a whole new chapter and automotive is really spearheading that. 

FS: What other industries are going to follow?

CE: Actually, it’s funny, I called it at Billington in Detroit that [medical device security] would go next, and then critical infrastructure would come after that. I think critical infrastructure definitions start to get pretty difficult with this stuff. Because what is critical infrastructure? Are we talking about a dam or are we talking about a building management system that runs an elevator? Both of them can kill people, but you’ve got corporate-level responsibility for one, and state or federal for the other. We’ve had more interest from the medical device community. Critical infrastructure is starting to chat with us, but I would say that one’s still pretty early.

FS: So the bug bounty company space is pretty volatile right now. Your company and HackerOne seem to be the big ones on the block, with startups nipping at your heels. Do you get a sense of rivalry or is everyone in this space in it for the common good of protecting the internet? 

CE: It’s interesting, because I consider HackerOne “frenemies,” in the sense that we have a common goal of basically finding vulnerabilities and killing them off, and making the internet more resilient. Then the adjacent goal to that is keeping researchers out of jail, and giving them the opportunity to be productive and interact with the vendors. That part we share.


We get locked in mortal combat over accounts every now and then. We actually have very different models, so in terms of the value that we provide, it looks similar from the outside, but once you get under the hood, it’s very different. The way I look at it is, the opportunity is bigger than both of us combined. Ultimately, if we’re the only ones doing this, I’d be questioning whether or not I was crazy. There’s validation there in the competition. I think for the sake of a healthy marketplace, competition is actually a good thing.

This interview has been edited for length and clarity. 


Written by Greg Otto

Greg Otto is Editor-in-Chief of CyberScoop, overseeing all editorial content for the website. Greg has led cybersecurity coverage that has won various awards, including accolades from the Society of Professional Journalists and the American Society of Business Publication Editors. Prior to joining Scoop News Group, Greg worked for the Washington Business Journal, U.S. News & World Report and WTOP Radio. He has a degree in broadcast journalism from Temple University.
