SafeType™ from Cyera Labs Supports Safe Generative AI Experimentation
As a developer, I love to experiment with new technology, and I am passionate about encouraging my teams to solve complex problems at the bleeding edge of innovation. Recently, the technology industry’s collective imagination has been captured by the promise (and potential pitfalls) of generative AI, with ChatGPT taking center stage in the discussions. Generative AI platforms hold tremendous potential to disrupt businesses.
Today, the Cyera Labs security research team released SafeType. This extension for Chrome and Edge browsers lets you know when your ChatGPT session includes sensitive data that you shouldn’t be sending to OpenAI, and enables you to anonymize that information. SafeType is an open source browser extension that bundles static classifiers for a number of commonly used data classes. SafeType is not connected to Cyera’s data security platform, does not share code with the platform, and does not interact with Cyera or our customers’ systems in any way. Cyera Labs’ goal is to help mitigate accidental disclosures while your teams experiment with the technology.
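To make the static-classifier idea concrete, here is a minimal TypeScript sketch of how a browser extension could flag and mask common data classes with pattern matching before a prompt is submitted. The data class names, regular expressions, and masking rules below are illustrative assumptions and are not taken from SafeType’s source code.

```typescript
// Illustrative only: hypothetical data classes, patterns, and masking rules.
interface Classifier {
  dataClass: string;               // human-readable label shown to the user
  pattern: RegExp;                 // static pattern that identifies the data class
  mask: (match: string) => string; // replacement used when anonymizing
}

const classifiers: Classifier[] = [
  {
    dataClass: "Email address",
    pattern: /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g,
    mask: () => "<EMAIL>",
  },
  {
    dataClass: "U.S. Social Security number",
    pattern: /\b\d{3}-\d{2}-\d{4}\b/g,
    mask: () => "<SSN>",
  },
  {
    dataClass: "Payment card number",
    pattern: /\b(?:\d[ -]?){13,16}\b/g,
    mask: (m) => `<CARD ending ${m.replace(/\D/g, "").slice(-4)}>`,
  },
];

// Report which data classes appear in a prompt before it is sent.
function detect(prompt: string): string[] {
  return classifiers
    .filter((c) => {
      c.pattern.lastIndex = 0; // global regexes are stateful; reset before reuse
      return c.pattern.test(prompt);
    })
    .map((c) => c.dataClass);
}

// Replace detected values with placeholders so the prompt can be sent safely.
function anonymize(prompt: string): string {
  return classifiers.reduce(
    (text, c) => text.replace(c.pattern, (m) => c.mask(m)),
    prompt
  );
}

const prompt = "Contact jane@example.com, SSN 123-45-6789";
console.log(detect(prompt));    // ["Email address", "U.S. Social Security number"]
console.log(anonymize(prompt)); // "Contact <EMAIL>, SSN <SSN>"
```

One appeal of this kind of approach is that scanning can happen entirely in the browser, consistent with an extension that never sends your prompts to an outside service for inspection.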
There is no doubt that ChatGPT is making headlines and capturing imaginations. Investors have clearly seen the promise: Microsoft is estimated to have invested approximately $10 billion, and VC firms including Sequoia Capital, Andreessen Horowitz, Thrive, K2 Global, and Founders Fund recently put in just over $300 million at a valuation between $27 billion and $29 billion. Not surprisingly, the technology ecosystem has been quick to build technologies that leverage the AI platform. At the Winter 2023 YC Demo Day, four startups claimed to be building “ChatGPT for X,” and generative AI was the focus of many discussions at this year’s RSA Conference in San Francisco.
Currently, ChatGPT records everything you type into it. Its privacy policy states that when you use ChatGPT, OpenAI may collect personal information from your messages, any files you upload, and any feedback you provide. That information can be shared with vendors and service providers, other businesses, affiliates, and legal entities, as well as with the AI trainers who review conversations. This raises clear concerns about privacy, disclosure of secrets, and misuse that can lead to data breaches and fraud.
These concerns led Italy and other countries to institute bans and demand that OpenAI apply safeguards. Several U.S. K-12 school districts and a few international universities have banned the use of ChatGPT by students and staff. And staff at JPMorgan Chase, Amazon, Verizon, and Accenture have reportedly been barred from using ChatGPT for work over concerns that employees may submit sensitive information to the chatbot.
Since our inception, Cyera has been working to harness the power that machine learning, AI, and large language models represent. As we have worked with the technology, we have frequently discussed how to put safeguards in place to maintain privacy and avoid oversharing. Watching the developments described above unfold, we realized that we could help businesses build awareness and put procedures in place that allow their teams to experiment with ChatGPT securely.
OpenAI has said that it plans to release ChatGPT Business as a solution “for professionals who need more control over their data as well as enterprises seeking to manage their end users.” One of the primary controls it will put in place will prevent end users’ data from being used to train its models. While that appears to be a positive step toward more secure use of the technology in the future, our team wanted to provide immediate benefits to businesses looking to enable their teams to experiment with ChatGPT securely, today.
SafeType™ is currently in community preview. We are gauging interest and looking for input on what would make this a more valuable tool for you and your team. Want the ability to add your own sensitive data definitions? Would it help to connect this to Cyera’s data security platform to get the full power of Cyera helping you secure ChatGPT sessions? We want to have these conversations with you. Please join our public Slack community, #cyeralabs, and share your thoughts with us.
And if you don't have SafeType yet, download it here!