Concerns around ChatGPT and OpenAI

Let's level set; what are OpenAI and ChatGPT?

OpenAI

OpenAI is an artificial intelligence research laboratory founded in December 2015 by tech luminaries including Elon Musk, Greg Brockman, Reid Hoffman, Peter Thiel, and Sam Altman. OpenAI's mission is to develop friendly artificial intelligence in a way that is most likely to benefit humanity as a whole, "unconstrained by a need to generate a financial return," and its research aims to "advance digital intelligence in the way that is most likely to benefit humanity in the long term." Alongside its research, OpenAI builds software applications and services powered by machine learning, including the GPT and Codex models discussed below.

ChatGPT

ChatGPT is a natural language processing (NLP) model developed by OpenAI. It is built on OpenAI's GPT-3.5 family of large language models and is designed to hold conversations in chatbot applications. The model is trained on large conversational and text datasets and generates natural-sounding responses by repeatedly predicting the next likely token in a conversation. It can also track topics across multiple conversation turns and offers various options for steering the generated output.
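
As a concrete illustration of the interface, here is a minimal sketch of querying the model programmatically, assuming the pre-1.0 openai Python package (pip install "openai<1.0") and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative:

```python
import os
import openai

# API key read from the environment rather than hard-coded
openai.api_key = os.environ["OPENAI_API_KEY"]

# The messages list carries the multi-turn conversation context
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a phishing email?"},
    ],
    temperature=0.7,
)

# The reply is the model's next-token predictions decoded into text
print(response["choices"][0]["message"]["content"])
```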

ChatGPT also ships with safety guardrails. OpenAI layers content moderation over the model so that requests for clearly harmful material, such as malware or instructions for attacks, are detected and refused, and it continues to tune these filters as abuse patterns emerge. As the Check Point research below shows, however, these guardrails are not airtight, and determined users can still coax dangerous output from the model.
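
OpenAI also exposes this moderation layer as a standalone endpoint that applications can call before or after hitting the model. A minimal sketch, again assuming the pre-1.0 openai package and the same environment-variable key (the input string is illustrative):

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the moderation endpoint whether a piece of text violates policy
result = openai.Moderation.create(input="Example user message to screen")

verdict = result["results"][0]
print("flagged:", verdict["flagged"])  # True if any policy category triggers
# Show only the categories that actually fired
print({name: hit for name, hit in verdict["categories"].items() if hit})
```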

GPT, the Generative Pre-trained Transformer family behind ChatGPT, has recently featured in security news because of its potential for misuse around confidential information and other sensitive data. GPT models read and write text with near-human fluency, which means they can be used to draft convincing phishing lures or to generate code that probes for protected data, and users who paste confidential material into prompts may be handing it to a third-party service. People need to understand the security risks associated with GPT, because misuse of the technology can threaten our privacy.

Definitely some tremendous innovation! I've found so much value in ChatGPT filling in the blanks in many areas of my day-to-day work.

So, what are some concerns? 
An interesting article from Check Point Research reads...


OPWNAI: AI THAT CAN SAVE THE DAY OR HACK IT AWAY
With the release of ChatGPT, OpenAI's new interface to its Large Language Model (LLM), there has been an explosion of interest in general AI in the media and on social networks in the last few weeks. This model is used in many applications all over the web and has been praised for its ability to generate well-written code and aid the development process. However, this new technology also brings risks. For instance, by lowering the bar for code generation, it can help less-skilled threat actors effortlessly launch cyber-attacks.

From image generation to writing code, AI models have made tremendous progress in multiple fields. The famous AlphaGo software beat the top professionals at the game of Go in 2016, and improved speech recognition and machine translation brought the world virtual assistants such as Siri and Alexa, which now play a major role in our daily lives.

Recently, public interest in AI spiked due to the release of ChatGPT, a prototype chatbot whose “purpose is to assist with a wide range of tasks and answer questions to the best of my ability.” Unless you’ve been disconnected from social media for the last few weeks, you’ve most likely seen countless images of ChatGPT interactions, from writing poetry to answering programming questions.

However, like any technology, ChatGPT’s increased popularity also carries increased risk. For example, Twitter is replete with examples of malicious code or dialogues generated by ChatGPT. Although OpenAI has invested tremendous effort into stopping abuse of its AI, it can still be used to produce dangerous code.

To illustrate this point, we decided to use ChatGPT and another platform, OpenAI’s Codex, an AI-based system that translates natural language to code, most capable in Python but proficient in other languages. We created a full infection flow and gave ourselves the following restriction: We did not write a single line of code and instead let the AIs do all the work. We only put together the pieces of the puzzle and executed the resulting attack.
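
(An aside from this blog, not part of the Check Point write-up: to show what "natural language to code" looks like in practice, here is a minimal, intentionally benign sketch against the Codex completion endpoint, assuming the pre-1.0 openai package; code-davinci-002 was the Codex model of that era and has since been deprecated.)

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Describe the desired program in plain English; Codex completes it as code
prompt = (
    "# Python 3\n"
    "# A function that checks whether a string is a valid IPv4 address\n"
    "def "
)

response = openai.Completion.create(
    model="code-davinci-002",  # Codex model of that era; since deprecated
    prompt=prompt,
    max_tokens=200,
    temperature=0,             # deterministic output suits code generation
)

print(prompt + response["choices"][0]["text"])
```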

We chose to illustrate our point with a single execution flow, a phishing email with a malicious Excel file weaponized with macros that downloads a reverse shell (one of the favorites among cybercrime actors). 
To read more, visit Check Point Research.

Summary

The expanding role of LLMs and AI in the cyber world is full of opportunity, but it also comes with risks. Although the code and infection flow presented in this article can be defended against with simple procedures, this is just an elementary showcase of the impact of AI research on cybersecurity. Multiple scripts can be generated quickly with slight variations in wording, and complicated attack processes can be automated using LLM APIs to generate other malicious artifacts. Defenders and threat hunters should be vigilant and quick to adopt this technology themselves; otherwise, our community will be one step behind the attackers.
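
One example of such a simple procedure is statically inspecting macro-bearing Office attachments before they reach users. A minimal defensive sketch, assuming the open-source oletools package (pip install oletools) and a hypothetical quarantined file name:

```python
from oletools.olevba import VBA_Parser

# Hypothetical attachment pulled from a mail-gateway quarantine
SUSPECT_FILE = "invoice.xlsm"

vba = VBA_Parser(SUSPECT_FILE)
if vba.detect_vba_macros():
    # List every macro stream embedded in the workbook
    for _, stream_path, vba_filename, _ in vba.extract_macros():
        print(f"macro stream: {stream_path} ({vba_filename})")
    # Flag suspicious keywords: AutoOpen, Shell, URL downloads, and so on
    for kw_type, keyword, description in vba.analyze_macros():
        print(f"[{kw_type}] {keyword}: {description}")
else:
    print("No VBA macros detected.")
vba.close()
```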

Reference Links:

ChatGPT
OpenAI
Check Point Research


OpenAI Tools | Quick Cheat Sheets

[Cheat sheet images from the original post are not reproduced here.]