Samsung's semiconductor business used ChatGPT to help diagnose problems in its source code. In the process, employees unintentionally leaked confidential data, including notes from an internal meeting about their hardware and the source code of a new program. Sadly, this was not an isolated incident: three such leaks were recorded within a single month.
To prevent future breaches, Samsung is now working on its own ChatGPT-like AI tool intended solely for internal employee use. This would let its fab workers get quick assistance while keeping sensitive company data in-house. The new tool, however, will only handle prompts of 1024 bytes or smaller.
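To make that cap concrete, here is a minimal sketch of how such a byte limit could be enforced, assuming it applies to the UTF-8 encoding of the prompt; the function name and encoding choice are illustrative, since Samsung has not published these details. Note that a byte cap is stricter than a character cap for multi-byte text such as Korean.

```python
# Illustrative sketch of a 1024-byte prompt cap (not Samsung's actual code).
# The limit is assumed to apply to the UTF-8 encoding of the prompt.

MAX_PROMPT_BYTES = 1024  # cap reported for Samsung's internal tool

def within_limit(prompt: str) -> bool:
    """Return True if the prompt's UTF-8 encoding fits within the byte cap."""
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES

print(within_limit("a" * 1024))      # True: 1024 ASCII characters occupy 1024 bytes
print(within_limit("반도체" * 120))   # False: each Hangul syllable takes 3 bytes (1080 total)
```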
The issue arose because ChatGPT is operated by a third-party business that relies on external servers: when Samsung employees entered their code, test sequences, and internal meeting content into the tool, that data left the company's control. Samsung swiftly alerted its management and staff to the possibility that confidential information had been compromised. Prior to March 11, Samsung had prohibited its employees from using the chatbot altogether.
Samsung is not the only company battling this problem. Many businesses are prohibiting the use of ChatGPT until they establish a clear policy around generative AI. And although ChatGPT offers an opt-out for user data collection, its developer may still have access to the data provided to the service.
What measures has Samsung taken to prevent future leaks?
Samsung has taken several measures to ensure that data leaks like those caused by ChatGPT do not happen again. First, the company has warned its staff to exercise caution when providing data to ChatGPT. Second, it has limited the size of questions that can be sent to the service to 1024 bytes.
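As an illustration of how these two measures might fit together (an assumption for the sketch, not Samsung's actual tooling), a company could place a small gate in front of any external chatbot API that rejects prompts exceeding the byte cap or containing obvious confidentiality markers; the marker list and function name below are hypothetical.

```python
# Hypothetical pre-submission gate for prompts bound for an external chatbot.
# Both the byte cap and the marker list are illustrative assumptions.

MAX_PROMPT_BYTES = 1024
BLOCKED_MARKERS = ("confidential", "internal only", "do not distribute")

def check_prompt(prompt: str) -> list[str]:
    """Return a list of policy violations; an empty list means the prompt may be sent."""
    violations = []
    if len(prompt.encode("utf-8")) > MAX_PROMPT_BYTES:
        violations.append(f"prompt exceeds {MAX_PROMPT_BYTES} bytes")
    lowered = prompt.lower()
    for marker in BLOCKED_MARKERS:
        if marker in lowered:
            violations.append(f"prompt contains blocked marker: {marker!r}")
    return violations

issues = check_prompt("CONFIDENTIAL: fab yield notes ...")
if issues:
    print("Blocked before leaving the network:", issues)
```

A keyword gate like this is only a first line of defence, of course; it cannot recognise source code or measurements that carry no obvious marker, which is one reason Samsung's longer-term answer is to keep the model itself in-house.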
Samsung is also creating a ChatGPT-like AI tool reserved for internal employee use, which would give fab workers fast assistance while protecting confidential company data. In the meantime, Samsung has cautioned staff about the dangers of using ChatGPT, emphasising that data entered into the chatbot is transferred to and stored on external servers, where the corporation can no longer control it.
What were the consequences for the employees who leaked confidential data?
The information currently available does not make clear what consequences the Samsung employees who exposed private information to ChatGPT will face. However, Samsung Electronics is taking action to stop further disclosures of confidential information through ChatGPT, such as limiting the size of submitted queries to 1024 bytes. The business has also warned its staff about the dangers of using ChatGPT.
In addition, Samsung Semiconductor is developing a proprietary AI tool for internal employee use that will accept only prompts of 1024 bytes or smaller. These initiatives show that Samsung takes the security of its confidential data seriously and is taking steps to reduce the risks involved in using ChatGPT.
What type of confidential information was leaked?
Samsung workers appear to have disclosed private company information to ChatGPT in three separate instances. The leaked data included notes from internal meetings, source code for a new program, and details about the productivity and yields of its fabs. In one case, a Samsung Semiconductor employee submitted the source code of a top-secret application to ChatGPT to fix bugs, unintentionally handing it to OpenAI, the external developer of the chatbot. Because ChatGPT retains the data it receives, OpenAI now has access to this confidential information.
How many times did Samsung employees leak confidential data to ChatGPT?
According to available reports, Samsung employees leaked confidential company information to ChatGPT on at least three occasions.