Artificial Intelligence is Helping Hackers Write Malicious Code—Cybersecurity Educators Have a Role to Play to Stop Them

by  Jones & Bartlett Learning     Mar 17, 2023

While artificial intelligence (AI) and machine learning are nothing new, the widespread availability of AI chatbots such as ChatGPT has renewed popular interest in the technology in 2023. In fact, one Salesforce study found that 77% of customers say AI chatbots will positively transform their expectations of companies over the next five years. When used as intended, AI chatbots like ChatGPT can help businesses and individuals alike streamline communication, automate repetitive tasks, and even generate content ranging from blog posts to poetry.

Unfortunately, it hasn't taken long for threat actors to manipulate AI chatbot software into generating malicious code. This revelation has left many cybersecurity experts and instructors weighing the benefits and limitations of these tools, and scrambling to develop solutions that will keep web users safe. Read on to find out what role cybersecurity instructors can play in thwarting hackers who want to use AI for harm instead of good.

Despite Safeguards, Hackers Are Creating Malware With Artificial Intelligence

When OpenAI designed ChatGPT, its developers anticipated that some users would try to use the technology for nefarious purposes. As a result, OpenAI built in safeguards meant to prevent the software from producing code that could be used maliciously. Likewise, ChatGPT's terms of service specifically prohibit the creation of any content that "attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm."
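OpenAI's actual safeguards are proprietary and considerably more sophisticated than anything shown here, but a deliberately naive keyword filter (a hypothetical classroom sketch, not OpenAI's implementation) illustrates the general idea of a content safeguard and why simple versions of it can be sidestepped by rephrasing a request:

```python
# A deliberately naive keyword-based content filter, for discussion only.
# This is a hypothetical sketch, NOT how ChatGPT's moderation works.

BLOCKED_TERMS = {"ransomware", "keylogger", "virus"}

def is_request_blocked(prompt: str) -> bool:
    """Reject prompts that contain an obviously prohibited term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

print(is_request_blocked("Write me some ransomware"))                    # True
print(is_request_blocked("Write a script that encrypts every file"))     # False
```

The second prompt sails through even though it asks for functionally similar code, which is one intuition for how attackers work around guardrails by describing harmful behavior in innocuous terms.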

Unfortunately, those safeguards proved ineffective. Early in 2023, it was revealed that underground hacking communities had found ways around ChatGPT's protections and used it to generate a Python-based script with relative ease. The script, which can encrypt, decrypt, and copy files, could readily serve as the core of ransomware for a threat actor.
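For classroom discussion, the encrypt/decrypt pattern that script reportedly used can be sketched with nothing but the Python standard library. This toy XOR cipher is deliberately weak and is not the reported script; it only shows how little code the basic pattern requires, which is exactly why instructors should cover it defensively:

```python
# Toy symmetric cipher for educational discussion only. Real ransomware
# uses strong, authenticated encryption; this weak XOR sketch merely
# demonstrates the encrypt/decrypt mechanics on in-memory bytes.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand a key into `length` pseudo-random bytes via chained SHA-256."""
    out = b""
    block = key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_transform(data: bytes, key: bytes) -> bytes:
    """XOR is its own inverse: the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"student records"
ciphertext = xor_transform(secret, b"demo-key")
assert xor_transform(ciphertext, b"demo-key") == secret  # round-trips
```

Seeing how short the mechanics are helps students appreciate why the barrier to entry matters more than the code itself, a point the next section returns to.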

OpenAI has released a later model, GPT-4, which is supposedly more difficult to lead astray or trick. Time will tell if the latest iteration will help or hinder the efforts of hackers.

Potential Implications for Cybersecurity and Cybercrime

There are a few major implications to consider. First, it didn't take underground hackers long to circumvent OpenAI's best intentions. This is troubling because bad actors may now push the technology further and generate even more dangerous malicious code.

Likewise, the ability to generate malicious code through an AI chatbot makes it easier and more accessible for just about anybody to become a hacker—even without extensive coding knowledge or experience. As a result, instances of cybersecurity attacks (and attack attempts) could spike.

For cybersecurity professionals, this also means that the race is on to develop further safeguards before hackers can act upon existing vulnerabilities. This is especially true as the full potential for this software to carry out malicious acts is not yet known.

Generating ransomware code is likely just the beginning—and there have already been reports that ChatGPT is being used to carry out other illicit activities (such as creating Dark Web marketplace scripts).

What Does This Mean for Cybersecurity Educators?

For those teaching in the cybersecurity industry, the widespread availability of AI chatbot tools may very well change the future of the field. With the ability to create malicious code falling into more hands than ever, cybersecurity instructors will likely need to rethink the ways in which they teach. Specifically, educators may need to incorporate more instruction on artificial intelligence, machine learning, malicious code, and ethical hacking.

The use of real-world simulations will also prove valuable in cybersecurity training and education, especially as it pertains to the growing use of AI software. Since 2010, Jones & Bartlett Learning has been offering Cloud Labs, a set of solutions designed to provide fully immersive mock infrastructures, allowing students to learn and practice foundational cybersecurity skills. These real-world simulations may also prove useful as a means of offering hands-on training to students that is specific to this emerging issue.

The Cybersecurity Field Needs to Stay One Step Ahead

While AI chatbot technology promises to make our lives easier in many ways, we're already beginning to see how this software can be used for malicious purposes. Cybersecurity experts will need to stay one step ahead of threat actors to protect end users, and educators face a growing responsibility to prepare students for these changing times with adapted cybersecurity instruction. Tools and resources provided by Jones & Bartlett Learning, ranging from virtual lab solutions to texts like Ethical Hacking: Techniques, Tools, and Countermeasures, Fourth Edition, enable cybersecurity educators to do just that.

Request Your Review Copy

