Original Article

An AdamW-Optimized Vision Transformer Framework with Backpropagation Training for Driver Drowsiness Detection in Smart Vehicular Safety

Mani Bakkiaraj P¹, Dr. K. Karuppasamy², Thenmalar R³, Sinduja R⁴

¹ PG student, Department of Computer Science & Engineering, RVS College Of Engineering & Technology, Coimbatore, Tamilnadu, India. ² Head of Department, Professor, Department of Computer Science & Engineering, RVS College Of Engineering & Technology, Coimbatore, Tamilnadu, India. ³ Project Guide, Assistant Professor, Department of Computer Science & Engineering, RVS College Of Engineering & Technology, Coimbatore, Tamilnadu, India. ⁴ Project Coordinator, Assistant Professor, Department of Computer Science & Engineering, RVS College Of Engineering & Technology, Coimbatore, Tamilnadu, India.

Published Online: January-April 2026

Pages: 273-279

Abstract

When drivers suffer from fatigue-induced cognitive impairment, the result is a tragic surge in worldwide traffic accidents that claim lives and drain vital economic resources. High-speed travel leaves no room for the slow reaction times that accompany exhaustion, yet many operators do not realize they are impaired until it is too late. Because physical symptoms often lag behind actual cognitive decline, reactive safety systems are not enough to prevent crashes; meaningful gains in road safety require flagging the earliest physiological and behavioural warning signs, enabling proactive intervention before an accident ever starts. Existing approaches, such as analysing vehicle dynamics, often suffer from reliability problems and are susceptible to environmental noise, so new detection methods are needed. To mitigate these deficiencies, this research proposes a real-time Driver Drowsiness Detection System leveraging optical sensing and deep representational learning architectures. The framework continuously acquires facial telemetry via a video stream, isolating ocular regions through facial landmark regression. An Adaptive Eye Characteristic Ratio (AECR) algorithm quantifies prolonged ocular closure, a cardinal indicator of fatigue. A Vision Transformer (ViT) model then analyses global spatial dependencies within the ocular features to categorize the operator's alertness state. Upon detection of somnolence, the system initiates immediate multi-modal alerts, and a relational database backend logs temporal fatigue metrics for longitudinal performance analytics.
Empirical validation under diverse illumination and pose conditions yielded a classification accuracy of approximately 92%, a false positive rate of 5%, and sub-second inference latency. This cost-efficient, non-intrusive solution addresses the limitations of legacy systems and scales to commercial fleets and private transport. Future iterations may integrate yawning analysis and infrared imaging for nocturnal efficacy.
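The abstract does not give the AECR formula. As a point of reference, the sketch below illustrates the standard eye-aspect-ratio measure that closure-based detectors of this kind typically build on, paired with a consecutive-frame counter to capture *prolonged* closure. The six-landmark ordering (dlib-style), the threshold of 0.22, and the 15-frame window are illustrative assumptions, not values taken from the paper.

```python
import math

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio from six (x, y) eye landmarks.

    Assumes the common dlib ordering p1..p6: p1/p4 are the horizontal
    eye corners, and (p2, p6), (p3, p5) are the vertical landmark pairs.
    The ratio drops toward zero as the eye closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

class ClosureDetector:
    """Flag drowsiness when the ratio stays below a threshold for a
    sustained run of frames, i.e. prolonged ocular closure rather than
    an ordinary blink. Threshold and frame count are tunable."""

    def __init__(self, threshold=0.22, min_frames=15):
        self.threshold = threshold
        self.min_frames = min_frames
        self.closed_frames = 0

    def update(self, ear):
        # Count consecutive below-threshold frames; any open-eye frame resets.
        if ear < self.threshold:
            self.closed_frames += 1
        else:
            self.closed_frames = 0
        return self.closed_frames >= self.min_frames
```

In a full pipeline, `eye_aspect_ratio` would be fed per-frame landmarks from a face-alignment model, while the ViT classifier described in the abstract would consume the cropped ocular region itself; the counter above only covers the closure-duration cue.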
