
Research shows that fake reports generated by AI are fooling experts

Takeaways

* AI can generate fake reports that are compelling enough to trick cybersecurity experts.

* Widespread use of these AI systems could hinder efforts to defend against cyberattacks.

* These systems could set off an AI arms race between misinformation generators and misinformation detectors.

If you use social media sites such as Facebook or Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation, flagged or not, has been aimed at the general public. Now imagine the possibility of misinformation, meaning information that is false or misleading, in scientific and technical fields such as cybersecurity, public safety, and medicine.

There is growing concern about misinformation spreading in these critical areas as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members conducting cybersecurity research, we studied a new avenue of misinformation in the scientific community. We found that artificial intelligence systems can generate false information in critical fields such as medicine and defense that is convincing enough to fool experts.

Ordinary misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise, however, has the potential for frightening outcomes, such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impact of spreading misinformation in the cybersecurity and medical communities. Using artificial intelligence models known as transformers, we generated fake cybersecurity news and COVID-19 medical studies, then presented the cybersecurity misinformation to cybersecurity experts for testing. We found that the misinformation generated by transformers was able to fool cybersecurity experts.

Transformers

Much of the technology used to identify and manage misinformation is powered by artificial intelligence. There is far too much false information for humans to detect without the help of technology, and AI allows computer scientists to fact-check large amounts of it quickly. Although AI helps people detect misinformation, it has, ironically, also been used in recent years to produce misinformation.

Transformers, such as BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries, and interpretations. They have been used for tasks such as storytelling and answering questions, pushing the boundaries of machines displaying human-like abilities in generating text.
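These capabilities are available through off-the-shelf tools. As a rough illustration (not tied to our study), here is a minimal sketch using the Hugging Face transformers library's pipelines for summarization and question answering; the example text and question are invented, and the default models are whatever the library ships with.

```python
# Minimal sketch: two common transformer tasks via Hugging Face pipelines.
# The example passage and question below are illustrative, not from the study.
from transformers import pipeline

summarizer = pipeline("summarization")        # loads the library's default summarization model
qa = pipeline("question-answering")           # loads the library's default QA model

passage = ("Transformer models such as BERT and GPT read large amounts of text "
           "and learn statistical patterns that let them translate, summarize, "
           "and answer questions about new passages.")

# Condense the passage into a short summary.
print(summarizer(passage, max_length=30, min_length=10)[0]["summary_text"])

# Answer a question using the passage as context.
print(qa(question="What can transformer models do?", context=passage)["answer"])
```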

Transformers have helped Google and other technology companies improve their search engines, and they have helped the general public with common problems such as battling writer's block.

Transformers can also be used for malicious purposes. Social networks such as Facebook and Twitter have already faced the challenge of AI-generated fake news across their platforms.

Misinformation with serious consequences

Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is a weakness in a computer system, and a cybersecurity attack is an act that exploits that weakness. For example, if the vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.
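To make the fine-tuning step concrete, here is a minimal sketch, assuming the Hugging Face transformers library, of how a GPT-2 model could be adapted to a plain-text corpus of threat write-ups. The file name `cti_corpus.txt`, the output directory, and the hyperparameters are illustrative assumptions, not our actual setup.

```python
# Minimal sketch (not the authors' pipeline): fine-tune GPT-2 on a plain-text
# corpus of cybersecurity vulnerability and attack descriptions.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          TextDataset, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the hypothetical corpus into fixed-length blocks for causal language modeling.
# (TextDataset is a legacy helper; it still works for a small demonstration corpus.)
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="cti_corpus.txt",
                            block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-cti",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Save both model and tokenizer so the directory can be reloaded later.
trainer.save_model("gpt2-cti")
tokenizer.save_pretrained("gpt2-cti")
```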

We then seeded the model with a sentence or phrase from an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented these generated descriptions to cyberthreat hunters, who sift through large amounts of information about cybersecurity threats. These professionals read threat descriptions to identify potential attacks and adjust the defenses of their systems.
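In the same hedged spirit, here is a sketch of the seeding step: the fine-tuned model is given the opening words of a report and asked to continue it. The prompt below is an invented placeholder, not a sample from our study, and `gpt2-cti` refers to the hypothetical model directory from the previous sketch.

```python
# Minimal sketch: seed the fine-tuned model with an opening phrase and let it
# complete the rest of the "threat description". The seed text is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-cti")  # hypothetical fine-tuned model dir

seed = "Attackers exploited a vulnerability in the flight-planning software to"
completion = generator(seed,
                       max_length=120,          # total length of seed plus generated text
                       do_sample=True,          # sample rather than greedy-decode
                       top_p=0.95,              # nucleus sampling
                       num_return_sequences=1)[0]["generated_text"]
print(completion)
```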

We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Consider one scenario: a crucial piece of cyberthreat intelligence involving the airline industry, which we generated in our study.

This misleading piece of intelligence contained false information about cyberattacks on airlines involving sensitive real-time flight data. The false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If analysts acted on such disinformation in a real-world scenario, the airline in question could face a serious attack exploiting a real vulnerability that went unaddressed.

A similar transformer-based model can generate information in the medical domain and fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone rigorous review are constantly being uploaded to sites such as medRxiv. They are not only described in the press but are also used to make public health decisions. Consider an example that is not real but was generated by our model after minimal fine-tuning of the default GPT-2 on COVID-19-related papers.

The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccination and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.


An AI misinformation arms race?

While examples like those from our study can be fact-checked, transformer-generated misinformation hinders industries such as health care and cybersecurity in adopting AI to cope with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence, which is then used to inform and train other automated systems to recognize possible attacks. If these automated systems process fake cybersecurity text, they will be less effective at detecting real threats.
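For a rough sense of the kind of automated extraction described above, the sketch below runs a generic named-entity-recognition pipeline over a snippet of threat text; the model choice and the snippet are illustrative assumptions, not the systems used in our study. A system trained on entities extracted from fabricated reports would inherit their errors.

```python
# Illustrative sketch: pull named entities out of a threat-report snippet so
# they could feed downstream detection rules. The snippet below is invented.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # default NER model from the library

threat_text = ("The malware contacts a command-and-control server hosted by "
               "ExampleCorp and exfiltrates flight data over port 8443.")

# Print each detected entity with its type and confidence score.
for entity in ner(threat_text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))
```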

We believe the result could be an arms race, as people spreading misinformation develop better ways to create false information in response to effective ways of recognizing it.

Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how misinformation is automatically generated helps in understanding how to recognize it. For example, automatically generated text often contains subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and flag claims that lack substantial support elsewhere.
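As a toy illustration of one such signal, the sketch below scores how surprising a passage is to an off-the-shelf GPT-2 model (its perplexity). This is only a heuristic, not the detectors described here; practical systems would combine many signals, including cross-source corroboration.

```python
# Toy detection signal: compute a passage's perplexity under GPT-2.
# Unusually low or high values can be one weak hint of machine-generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the perplexity of `text` under GPT-2 (exp of mean token cross-entropy)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over the tokens
    return float(torch.exp(loss))

# Example passage (invented) scored by the heuristic.
print(perplexity("The vendor released a patch for the authentication bypass."))
```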

Ultimately, everyone needs to be more vigilant about which information is trustworthy and aware that hackers exploit people's credulity, especially when information does not come from reputable news sources or published scientific research.

Authors: Priyanka Ranade, PhD Student in Computer Science and Electrical Engineering, University of Maryland, Baltimore County | Anupam Joshi, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County | Tim Finin, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County
