Get up to $20,000 for Discovering ChatGPT Security Flaws

ChatGPT may be the hottest technology right now, but it’s not immune to the problems that plague any software. Bugs and crashes affect all running code, but here the stakes are higher than a bad user experience: the wrong bug could let attackers compromise the security of OpenAI users, a serious risk considering the service hit 100 million active users back in January. OpenAI wants to do something about it, and will pay you up to $20,000 for your help.

OpenAI Bug Bounty brings big payouts

On Tuesday, April 11, OpenAI announced its new Bug Bounty program, inviting “security researchers, ethical hackers, and technology enthusiasts” to examine its products (including ChatGPT) for “security vulnerabilities, bugs, or weaknesses.” If you find such a flaw, OpenAI will reward you in cash. Payouts scale with the severity of the problem, ranging from $200 for “low severity” findings to $20,000 for “exceptional discoveries.”

Bug bounty programs are actually quite common. Many companies offer them, outsourcing the work of finding bugs to anyone willing to look. It’s a bit like beta testing an app: sure, you can have your own developers hunt for bugs and crashes, but relying on such a limited pool of testers increases the chances of missing important issues.

With bug bounties, the stakes are even higher, because companies are most interested in finding the bugs that make their software, and therefore their users, vulnerable to security threats.

How to register for the OpenAI Bug Bounty program

The OpenAI Bug Bounty program is run in partnership with Bugcrowd, a platform that crowdsources bug hunting. You can register for the program through the official Bugcrowd website, where, at the time of writing, 24 vulnerabilities had already been rewarded, with an average payout of $983.33.

However, OpenAI wants to be clear that model safety issues are out of scope for this program. If, while testing one of OpenAI’s products, you find the model behaving in ways it shouldn’t, you should fill out the Model Behavior Feedback form instead of filing a Bug Bounty report. For example, you can’t claim a reward if the model tells you how to do something bad, or if it writes malicious code for you. Hallucinations are likewise outside the scope of the bounty.

OpenAI’s Bugcrowd page has a long list of in-scope issues, as well as an even longer list of out-of-scope ones. Be sure to read the rules carefully before submitting any bug reports.
