Improvement of Text CAPTCHA Codes by Comparing Adversarial Techniques Against Deep Learning Model Attacks

Research output: Chapter in Book/Report/Conference proceeding › Paper (Conference contribution) › peer-review

Abstract

CAPTCHAs are essential tools in computer security to distinguish between humans and automated programs. Although widely used in web applications to prevent unauthorized access and spam, advances in artificial intelligence have increased attacks against these systems. This study focuses on improving the security of CAPTCHAs using adversarial techniques such as FGSM and PGD, exploring their effectiveness against a deep learning model. Furthermore, a generative adversarial network is employed to strengthen resistance to these attacks. The research also includes human validation to evaluate the robustness of different types of CAPTCHAs against simulated attacks. Our findings demonstrate that while adversarial modifications enhance security, they require careful calibration to avoid excessive usability degradation.
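FGSM, one of the adversarial techniques compared in the abstract, perturbs an input by a small step in the direction of the sign of the loss gradient. The following is a minimal illustrative sketch against a toy logistic "classifier" — the weights, inputs, and step size are invented for illustration and are not taken from the paper, which applies the technique to CAPTCHA images and a deep learning model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of a toy logistic model on input x."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: move each 'pixel' by eps in the sign of the
    loss gradient, then clip back to the valid [0, 1] range."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # d(loss)/d(x_i) = (p - y) * w_i for the logistic model above
    grad = [(p - y) * wi for wi in w]
    return [min(1.0, max(0.0, xi + eps * sign(g)))
            for xi, g in zip(x, grad)]

# Illustrative 4-"pixel" input and hypothetical model parameters
x = [0.2, 0.8, 0.5, 0.4]
w, b, y = [2.0, -1.0, 0.5, 1.5], 0.1, 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
assert bce_loss(x_adv, y, w, b) >= bce_loss(x, y, w, b)
```

The assertion checks the defining property of the attack: the perturbed input incurs at least as large a loss as the clean one. PGD, the other technique named in the abstract, iterates this same step several times with a projection back into an epsilon-ball around the original input.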

Original language: English
Title of host publication: Artificial Intelligence, COMIA 2025 - 17th Mexican Congress, Proceedings
Editors: Lourdes Martínez-Villaseñor, Bella Martínez-Seis, Obdulia Pichardo
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 133-144
Number of pages: 12
ISBN (Print): 9783031979125
DOIs
State: Published - 2025
Event: 17th Mexican Conference on Artificial Intelligence, COMIA 2025 - Mexico City, Mexico
Duration: 12 May 2025 - 16 May 2025

Publication series

Name: Communications in Computer and Information Science
Volume: 2554 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 17th Mexican Conference on Artificial Intelligence, COMIA 2025
Country/Territory: Mexico
City: Mexico City
Period: 12/05/25 - 16/05/25
