St. Thomas' College of Engineering and Technology

Drowsy Driver Detection System

Prepared by
Rohit Kumar Choubey (12200219025)
Alok Kumar Jha (12200219021)
Dhiraj Ojha (12200219017)
Sagnik Sinha (12200219011)

Under the guidance of
Dr. Ranjit Ghoshal
Assistant Professor, Department of Information Technology

Project report submitted in partial fulfillment of the requirement for the degree of B.Tech in Information Technology

Department of Information Technology
Affiliated to Maulana Abul Kalam Azad University of Technology, West Bengal
May, 2023

This is to certify that the work in preparing the project entitled "Drowsy Driver Detection System" has been carried out by Rohit Kumar Choubey, Alok Kumar Jha, Dhiraj Ojha, and Sagnik Sinha under my guidance during the session 2022-2023 and is accepted in partial fulfillment of the requirement for the degree of B.Tech in Information Technology.

Signature                                        Signature
Dr. Arindam Chakravorty                          Dr. Ranjit Ghoshal
Head, Department of Information Technology       Department of Information Technology

Acknowledgement

The success and final outcome of this project required a great deal of guidance and assistance from many people, and we are extremely fortunate to have received this support throughout our project work. Whatever we have accomplished is due to such guidance and assistance, and we would like to thank everyone involved. At the very outset, we extend our sincere and heartfelt gratitude to all those who gave us the opportunity to work on this wonderful project on the topic "Drowsy Driver Detection System". Without their active guidance, help, cooperation and encouragement, we would not have made headway in this project.

We owe our profound gratitude to our project mentor, Dr. Ranjit Ghoshal, for his conscientious guidance and encouragement; he took a keen interest in our project work and guided us until its completion by providing all the necessary information for developing the project. We extend our gratitude to St. Thomas' College of Engineering and Technology, our Principal Dr. Shila Ghosh, and the Department of Information Technology for giving us this opportunity. We also acknowledge, with a deep sense of reverence, our gratitude towards our parents and family, who have always supported us morally as well as economically. We are thankful for, and fortunate to have received, constant encouragement, support and guidance from all the teaching staff of the Department of Information Technology, which helped us successfully complete our project work.
Last but not least, our gratitude goes to all our friends who directly or indirectly helped us in finalizing this project report within the limited time frame.

Signature with date        Signature with date        Signature with date        Signature with date
Rohit Kumar Choubey        Alok Kumar Jha             Dhiraj Ojha                Sagnik Sinha

Vision & Mission (St. Thomas' College of Engineering & Technology)

Vision
To evolve itself into an industry-oriented, research-based, recognized hub of creative solutions in various fields of engineering by establishing a progressive teaching-learning process, with the ultimate objective of meeting the technological challenges faced by the nation and the society.

Mission
• To create opportunities for students and faculty members in acquiring professional knowledge and developing social attitudes with ethical and moral values.
• To enhance the quality of engineering education through an accessible, comprehensive, industry- and research-oriented teaching-learning process.
• To satisfy the ever-changing needs of the nation for evolution and absorption of sustainable and environment-friendly technologies.
Vision & Mission (Department of Information Technology)

Vision
To promote the advancement of learning in Information Technology through research-oriented dissemination of knowledge, which will lead to innovative applications of information in industry and society.

Mission
• To incubate students to grow into industry-ready professionals, proficient research scholars and enterprising entrepreneurs.
• To create a learner-centric environment that motivates the students to adopt the emerging technologies of the rapidly changing information society.
• To promote social, environmental and technological responsiveness among the members of the faculty and students.

Program Educational Objectives (PEO)
Graduates of the Information Technology Program shall:
PEO 1: Exhibit the skills and knowledge required to design, develop and implement IT solutions for real-life problems.
PEO 2: Excel in professional career, higher education and research.
PEO 3: Demonstrate professionalism, entrepreneurship, ethical behavior, communication skills and collaborative teamwork, and adapt to emerging trends by engaging in lifelong learning.

PROGRAM OUTCOMES (POs)
Engineering graduates will be able to:
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of the information, to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modeling, to complex engineering activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
Project Mapping with Program Outcomes

PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12
 3    3    3    3    3    3    1    2    3    3     3     2

Correlation levels are defined as: 1 - Slight (Low), 2 - Moderate (Medium), 3 - Substantial (High).

Justification:
In this project we apply our engineering knowledge, design system components, and use modern IT tools, including prediction and modeling. Through the project we also come to understand professional engineering solutions in their social and environmental contexts, demonstrate the ability to communicate with the engineering community and with society at large (comprehending and writing design documentation, making effective presentations and giving clear instructions), and recognize the need for, and engage in, independent life-long learning in the broad context of technological change. Teamwork (PO9): the project demonstrates the ability to function in a multidisciplinary team. Communication (PO10): the project demonstrates the ability to communicate and present effectively. Project management (PO11): the project demonstrates the ability to use modern engineering tools, techniques, skills and management principles, working as a member and leader of a team to manage the project in a multidisciplinary environment.

Program Specific Outcomes (PSOs)
At the end of the program the students will be able to:
PSO1 (Programming): Apply programming knowledge to build an efficient and effective solution to the problem, with error-free, well-documented and reusable code, a user-friendly interface and a well-organized database.
PSO2 (Multimedia Authoring): Create a multimedia product using proper metaphors, designing effective navigation following human-computer interface rules with proper interactivity, which will be useful for educational, social and business purposes.
PSO3 (Software Engineering): Understand and analyze a big, complex problem and decompose it into relatively smaller and independent modules, either algorithmically or in an object-oriented way, choosing the correct life-cycle model and using effective test cases.

Project Mapping with Program Specific Outcomes

PSO1  PSO2  PSO3
 3     -     3

Correlation levels are defined as: 1 - Slight (Low), 2 - Moderate (Medium), 3 - Substantial (High).

Justification:
This project requires us to apply programming knowledge to build an efficient and effective solution to the problem, with error-free, well-documented and reusable code, a user-friendly interface and a well-organized database. Hence, the project substantially satisfies PSO1.
Creation of multimedia-enabled web solutions using information in different forms for business, education and society at large is not applicable in this project; hence, PSO2 is not applicable. Understanding and analyzing a big, complex problem and decomposing it into relatively smaller and independent modules algorithmically is done in this project; hence, PSO3 is satisfied substantially.

Index

Introduction
Chapter 1
  1.1 Problem Statement
  1.2 Problem Definition
  1.3 Objective
  1.4 Tools and Platform
  1.5 Brief Discussion on Problem
Chapter 2: Concepts and Problem Analysis
Chapter 3: Design and Methodology
Chapter 4: Sample Codes
Chapter 5: Testing, Results, Discussion on Results
Chapter 6
  6.1 Scope for Future Improvement
  6.2 Conclusion
Annexure: References / Bibliography

Introduction:

Drowsy driving is a serious safety concern that can lead to accidents and fatalities on the roads. It occurs when a driver becomes fatigued or drowsy, resulting in reduced attention, slower reaction times, and impaired decision-making abilities. Drowsiness can be caused by various factors, including sleep deprivation, long hours of driving, medication, or underlying health conditions. To address this issue, drowsy driver detection systems have been developed. These systems aim to monitor the driver's physiological and behavioral indicators to detect signs of drowsiness in real time. By alerting the driver or triggering automated safety measures, they help mitigate the risks associated with drowsy driving and prevent potential accidents.

The automotive population is increasing exponentially in our country, and the biggest problem accompanying the increased use of vehicles is the rising number of road accidents. The frequency of road accidents in India is among the highest in the world: according to reports of the National Crime Records Bureau (NCRB), about 135,000 road-accident-related deaths occur every year in India. The Global Status Report on Road Safety published by the World Health Organization (WHO) identified errors and carelessness of the driver as major causes of road accidents. Driver sleepiness, alcoholism and carelessness are key contributors to the accident scenario, and the resulting fatalities, associated expenses and related dangers have been recognized as a serious threat to the country. All these factors led to the development of Intelligent Transportation Systems (ITS). ITS includes driver assistance systems such as Adaptive Cruise Control, Park Assistance Systems, Pedestrian Detection Systems, Intelligent Headlights, Blind Spot Detection Systems, etc. Taking these factors into account, monitoring the driver's state is a major challenge in designing advanced driver assistance systems.
Driver errors and carelessness contribute to most of the road accidents occurring nowadays. The major driver errors are caused by drowsiness and by drunken and reckless behaviour of the driver, and the resulting mistakes cause great loss to humanity. In order to minimize the effects of driver abnormalities, a system for abnormality monitoring has to be built into the vehicle. Real-time detection of these behaviours is a serious issue in the design of advanced safety systems in automobiles. This project focuses on a driver abnormality detection system within ITS in the automotive domain.

1.1 Problem Statement:
Drowsy Driver Detection System using Machine Learning.

1.2 Problem Definition:
The driver of a vehicle is categorized as drowsy or not based on classification algorithms.

1.3 Objective:
The aim of this project is to build a model which correctly recognizes the driver of a vehicle as fit to drive or not using Machine Learning. The project extends to building an interface which automobiles can use to detect drowsiness in drivers and prevent accidents.

1.4 Tools and Platform:
Drowsy driver detection systems can be developed using various tools and platforms depending on the specific requirements and technologies used. Here are some common tools and platforms used in the development of drowsy driver detection systems:

Programming Languages:
• Python: Python is a popular language for developing computer vision and machine learning algorithms, which are commonly used in drowsy driver detection systems.
• JavaScript: JavaScript is a high-level programming language commonly used for web development. While it is not typically associated with deep learning or neural network training, JavaScript can still be utilized for certain aspects of machine learning, including data preprocessing, model deployment, and creating interactive visualizations.

Machine Learning and Computer Vision Libraries and Platforms:
• OpenCV: OpenCV is a widely used open-source library that provides a comprehensive set of computer vision functions and algorithms, including face detection, eye tracking, and image processing.
• Kaggle: Kaggle is a popular online platform for data science and machine learning competitions, as well as a community for data scientists and machine learning enthusiasts. It provides a wide range of datasets, code notebooks, and competitions for practice, learning, and collaboration.
• TensorFlow: TensorFlow is a popular machine learning framework that offers a variety of tools for training and deploying machine learning models, including the deep neural networks used in drowsy driver detection.
• NumPy: NumPy ("Numerical Python") is a fundamental library for numerical computing in Python. It provides a powerful multi-dimensional array object, along with a collection of functions for performing various mathematical operations on arrays efficiently. NumPy is widely used in scientific computing, data analysis, and machine learning applications.
• Keras: Keras is a high-level deep learning framework written in Python. It provides a user-friendly and intuitive interface for building and training neural networks.
Keras is designed to be easy to use, modular, and extensible, making it a popular choice for both beginners and experienced deep learning practitioners.

Hardware Platforms:
Specifications (as reported by the cloud environment used):
▪ GPU: 2 x T4 (GPU memory used: 0 bytes, max 14.8 GB)
▪ Hard disk: 4 GB used, max 73.1 GB
▪ RAM: 600.8 MB used, max 13 GB

Integrated Development Environments (IDEs):
▪ Google Colab: Google Colab is an online platform provided by Google that offers a free cloud-based environment for running and sharing Jupyter Notebook-based code. It is a popular choice among data scientists and machine learning practitioners, as it provides access to powerful hardware resources and offers seamless integration with other Google services.

These are just a few examples of tools and platforms commonly used in the development of drowsy driver detection systems. The choice of tools and platforms may vary based on the specific requirements, expertise, and preferences of the development team.

1.5 Brief Discussion on Problem:
Driver drowsiness detection is a car safety technology which helps prevent accidents caused by the driver getting drowsy. Various studies have suggested that around 20% of all road accidents are fatigue-related, and up to 50% on certain roads. Some current systems learn driver patterns and can detect when a driver is becoming drowsy. [1] Nowadays, drowsiness of drivers is one of the main causes of road accidents, and it is natural for drivers on long drives to doze off behind the steering wheel. In this project we build a drowsiness detection system that alerts the driver as soon as they start to fall asleep.

2. Concepts and Problem Analysis

Concepts Analysis:
A drowsy driver detection system is designed to identify signs of drowsiness or fatigue in a driver and alert them in order to prevent potential accidents. The system typically employs various sensors and algorithms to monitor the driver's behavior, physiological indicators, and vehicle movement patterns. By analyzing this data, the system can detect signs of drowsiness and take appropriate actions to mitigate the risks.

1. Eye Tracking: The system tracks the driver's eye movements and analyzes factors such as blink rate, eye closure duration, and gaze direction. Drowsy drivers tend to have longer eye closure durations or exhibit abnormal eye movement patterns.
2. Facial Analysis: Using cameras or infrared sensors, the system can analyze the driver's facial expressions and detect signs of fatigue, such as drooping eyelids or yawning.
3. Steering Behavior: The system can monitor the driver's steering patterns and detect deviations or erratic movements that may indicate drowsiness.
4. Lane Departure Warning: By analyzing the vehicle's position relative to the lane markings, the system can detect if the driver is drifting out of their lane, which is a common sign of drowsiness.
5. Physiological Indicators: Some advanced systems may include sensors to measure physiological parameters like heart rate, respiration rate, or brainwave activity to assess the driver's level of fatigue.

Problem Analysis:
The drowsy driver detection system addresses the critical issue of driver fatigue, which is a significant cause of road accidents worldwide. Fatigue can impair a driver's attention, reaction time, and decision-making abilities, leading to increased risks on the road.
By detecting early signs of drowsiness, the system can intervene and alert the driver, helping to prevent accidents and save lives. However, there are some challenges and limitations associated with drowsy driver detection systems:

1. Accuracy: Achieving high accuracy in detecting drowsiness is crucial to avoid false positives or false negatives. Factors like external lighting conditions, driver variability, and sensor limitations can affect the system's accuracy.
2. Real-Time Detection: The system needs to detect drowsiness in real time to provide timely alerts. Delays or lags in detection can reduce the effectiveness of the system.
3. Individual Differences: Different individuals may exhibit varying signs of drowsiness, making it challenging to develop a universal detection algorithm that applies to all drivers.
4. Driver Acceptance: Some drivers may perceive the system as intrusive or annoying, leading to resistance or non-compliance. Ensuring user acceptance and addressing privacy concerns is essential for the widespread adoption of such systems.
5. Environmental Factors: Environmental conditions, such as vibrations, noise, or extreme temperatures, can impact the system's performance and reliability.

Addressing these challenges requires ongoing research, technological advancements, and user-centered design to develop more accurate, reliable, and user-friendly drowsy driver detection systems.

3. Design and Methodology

The aim of the drowsy driver detection system is to monitor the driver's state and alert them when signs of drowsiness or fatigue are detected. This system helps prevent accidents caused by driver fatigue and ensures road safety.

Hardware Requirements:
To implement the drowsy driver detection system, you would typically need the following hardware components:
▪ Camera: Used to capture the driver's face and eye movements.
▪ Infrared (IR) LED: Provides illumination for the camera in low-light conditions.
▪ Processor: Responsible for image processing and analysis.
▪ Alarm or Alert Mechanism: Provides a warning signal to the driver when drowsiness is detected.

Specifications (of the development environment):
▪ GPU: 2 x T4 (GPU memory used: 0 bytes, max 14.8 GB)
▪ Hard disk: 4 GB used, max 73.1 GB
▪ RAM: 600.8 MB used, max 13 GB

Methodology:
The drowsy driver detection system can be developed using the following steps:

Step 1: Face and Eye Detection
Utilize image processing techniques to detect and track the driver's face and eyes. Techniques such as Haar cascades (the Viola-Jones algorithm) or deep learning-based methods like convolutional neural networks (CNNs) can be used for face and eye detection.

Step 2: Eye State Classification
Analyze the driver's eye region to determine the state of the eyes (e.g., open, closed, partially closed). This can be done by measuring quantities such as the eye aspect ratio (EAR), which compares the vertical distances between the upper and lower eyelid landmarks to the horizontal distance between the eye corners. A minimal sketch of this computation, combined with the threshold-based detection and alert logic described in Steps 3 and 4 below, follows.
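The snippet below is only an illustrative sketch, not the project's exact code: it assumes six (x, y) landmarks per eye in the common dlib ordering, and the threshold values and the `alert_fn` callback are placeholders that would need tuning for a real deployment.

```python
# Minimal sketch: eye aspect ratio (EAR) plus a consecutive-frame threshold.
# Assumes 6 (x, y) eye landmarks per eye in the usual dlib ordering:
# [outer corner, upper-left, upper-right, inner corner, lower-right, lower-left].
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) with landmark coordinates for one eye."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical eyelid distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical eyelid distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance between eye corners
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.25     # below this the eye is treated as closed (tunable)
CONSEC_FRAMES = 20       # this many consecutive "closed" frames trigger an alert

closed_frames = 0

def update_drowsiness(left_eye, right_eye, alert_fn):
    """Call once per video frame with the two sets of eye landmarks."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    if ear < EAR_THRESHOLD:
        closed_frames += 1
        if closed_frames >= CONSEC_FRAMES:
            alert_fn()               # e.g. sound a buzzer or flash a dashboard warning
    else:
        closed_frames = 0            # eyes open again: reset the counter
    return ear
```

Open eyes typically give EAR values around 0.3, while a sustained drop toward 0.2 or below indicates closure; counting consecutive low-EAR frames rather than reacting to a single frame is what distinguishes drowsiness from an ordinary blink.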
Step 3: Drowsiness Detection
Based on the eye state classification, implement an algorithm to detect drowsiness. For example, if the driver's eyes remain closed or partially closed for an extended period, it may indicate drowsiness. You can set a threshold for the duration or frequency of closed eyes to determine drowsiness.

Step 4: Alert Mechanism
When drowsiness is detected, activate the alert mechanism to notify the driver. This can be done using audible alarms, vibrations in the steering wheel or seat, visual alerts on the dashboard, or even voice prompts.

Step 5: Additional Features
To enhance the system's accuracy, you can incorporate other indicators of drowsiness such as head position and movement, yawning detection using mouth-region analysis, or audio cues like snoring or sudden braking sounds.

Step 6: Real-time Monitoring and Feedback
Continuously monitor the driver's state in real time and provide feedback. This can include displaying warnings on the vehicle's display or logging data for analysis and further improvements.

Testing and Validation:
To ensure the effectiveness and reliability of the drowsy driver detection system, rigorous testing and validation should be performed. This includes collecting a diverse dataset of drivers in different driving conditions and evaluating the system's performance in terms of accuracy, sensitivity, specificity, and response time. Fine-tuning the algorithm and hardware setup may be necessary based on the testing results.

Integration:
Integrate the drowsy driver detection system into the vehicle's existing infrastructure. This may involve connecting with the vehicle's onboard computer system, integrating with other safety features like automatic braking systems, or incorporating the system into a standalone device that can be easily installed in various vehicles.

User Interface:
Develop a user-friendly interface to configure and monitor the system. This can include settings for sensitivity, alert types, and customization options. The interface can also provide real-time feedback on the driver's state and system performance.

The design and methodology may vary depending on the specific implementation and technology choices. It is crucial to consider the legal and ethical aspects of implementing such systems and to comply with the relevant regulations and standards.

Model Description:
Here we use two models. The first model detects facial features from a camera snapshot; based on these features, localised images of the subject's two eyes are extracted. These eye images are then fed to a second model that performs the classification.

• HAAR CASCADE FACIAL AND EYE LANDMARK DETECTION:
Object detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their 2001 paper, "Rapid Object Detection using a Boosted Cascade of Simple Features". It is a machine learning based approach in which a cascade function is trained from a large number of positive and negative images and is then used to detect objects in other images. Here we use it for face detection. Initially, the algorithm needs many positive images (images of faces) and negative images (images without faces) to train the classifier, from which features must be extracted. For this, Haar features are used; they are rectangular filters much like convolutional kernels.
Each feature is a single value obtained by subtracting the sum of the pixels under the white rectangle from the sum of the pixels under the black rectangle. All possible sizes and locations of each kernel are then used to calculate a very large number of features: even a 24x24 window results in over 160,000 features. Each feature calculation requires the sum of the pixels under the white and black rectangles. To make this efficient, the authors introduced the integral image: however large the image, it reduces the pixel-sum calculation for any rectangle to an operation involving just four array accesses, which makes the computation very fast.

Most of the features calculated this way are, however, irrelevant. Consider, for example, two good features described by Viola and Jones: the first exploits the property that the region of the eyes is often darker than the region of the nose and cheeks, while the second relies on the eyes being darker than the bridge of the nose. The same windows applied to the cheeks or anywhere else are irrelevant. So how are the best features selected out of the 160,000+? This is achieved with AdaBoost. Every feature is applied to all the training images, and for each feature the best threshold that separates faces from non-faces is found. There will obviously be errors and misclassifications, so the features with the minimum error rate are selected, i.e. the features that most accurately classify the face and non-face images. (The process is not quite this simple: each image is given an equal weight at the start; after each classification round, the weights of misclassified images are increased, the process is repeated, and new error rates and weights are calculated until the required accuracy or error rate is achieved or the required number of features is found.) The final classifier is a weighted sum of these weak classifiers. They are called weak because each alone cannot classify the image, but together they form a strong classifier. The original paper reports that even 200 features provide detection with 95% accuracy; the final setup had around 6,000 features, a large reduction from 160,000+.

Even so, applying all 6,000 features to every 24x24 window of an image would be inefficient and time consuming. The authors' solution rests on the observation that most of an image is non-face region, so it is better to have a cheap test that can quickly reject a window that is not a face region and never process it again, spending more time only on regions that may contain a face. For this they introduced the Cascade of Classifiers: instead of applying all 6,000 features to a window, the features are grouped into stages of classifiers that are applied one by one (normally the first few stages contain very few features). If a window fails the first stage, it is discarded and the remaining features are not considered; if it passes, the second stage of features is applied, and so on. A window that passes all stages is a face region.
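OpenCV ships pre-trained cascade files for frontal faces and eyes, so the first stage of the pipeline described above can be sketched as follows. This is a minimal, illustrative sketch rather than the project's exact code: the cascade XML file names are the standard ones bundled with OpenCV, while the patch size and the `classify_eye` callback are placeholders standing in for the second-stage classifier.

```python
# Minimal sketch of the first stage: Haar-cascade face and eye detection with
# OpenCV, followed by eye-patch extraction for the second-stage classifier.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eye_patches(frame_bgr, patch_size=(64, 64)):
    """Return a list of resized grayscale eye patches found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    patches = []
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (fx, fy, fw, fh) in faces:
        face_roi = gray[fy:fy + fh, fx:fx + fw]     # search for eyes only inside the face
        eyes = eye_cascade.detectMultiScale(face_roi, scaleFactor=1.1, minNeighbors=5)
        for (ex, ey, ew, eh) in eyes:
            patch = face_roi[ey:ey + eh, ex:ex + ew]
            patches.append(cv2.resize(patch, patch_size))
    return patches

# Example usage with a webcam feed; classify_eye is a hypothetical second-stage model call.
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     for eye in extract_eye_patches(frame):
#         label = classify_eye(eye)
```

Restricting the eye cascade to the face region both speeds up detection and suppresses false eye detections elsewhere in the frame, which is why the face detector is run first.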
For reference, the authors' final detector had over 6,000 features arranged in 38 stages, with 1, 10, 25, 25 and 50 features in the first five stages (the two features mentioned above are actually the best two features obtained from AdaBoost). According to the authors, on average only about 10 of the 6,000+ features are evaluated per sub-window.

• GOOGLE IMAGENET-BASED CLASSIFIER (MOBILENET):
Google MobileNet is a family of lightweight convolutional neural network (CNN) models developed by Google. These models are designed to be efficient and suitable for mobile and embedded devices with limited computational resources, where memory and processing power are often constrained. MobileNet models achieve their efficiency by using depthwise separable convolutions. Traditional convolutions perform spatial and channel-wise filtering together, which can be computationally expensive; in depthwise separable convolutions, the spatial filtering and the channel-wise mixing are split into two separate convolutional layers. The depthwise convolution applies a single filter per input channel, while the pointwise convolution performs a 1x1 convolution to mix the channels. This factorization significantly reduces the number of computations, making the models lightweight.

The original MobileNet model, MobileNetV1, was introduced in 2017. Since then, several versions have been developed, including MobileNetV2 and MobileNetV3. MobileNetV2 introduced improvements such as inverted residual blocks and linear bottlenecks, which further enhanced the model's efficiency and accuracy. MobileNetV3 incorporated additional optimizations, such as squeeze-and-excitation modules and improved activation functions like Swish. MobileNet models have been widely adopted in applications where resource-efficient deep learning models are required; they are particularly popular for image classification, object detection, and image segmentation on mobile devices, embedded systems, and edge devices, where their lightweight design lets them run efficiently while still achieving reasonably accurate results.

The main components of the MobileNet architecture are:

1. Depthwise Separable Convolutions: MobileNet replaces traditional convolutions with depthwise separable convolutions. A depthwise convolution applies a single filter per input channel, performing spatial filtering; this is followed by a pointwise convolution that applies a 1x1 filter to combine the channels. By separating the spatial and channel-wise filtering, MobileNet significantly reduces the number of computations.

2. Inverted Residuals (MobileNetV2): MobileNetV2 introduced inverted residual blocks to further improve efficiency. In an inverted residual block, the input is first expanded to a higher-dimensional feature space using a 1x1 convolution, then depthwise separable convolutions are applied, and finally a 1x1 convolution reduces the dimensions back to the original. This design allows the model to capture complex features while keeping the number of parameters and computations low.
3. Linear Bottlenecks (MobileNetV2): MobileNetV2 employs linear bottlenecks to reduce the model's reliance on non-linear activation functions. A linear bottleneck uses a linear activation function (i.e., no non-linearity such as ReLU) on the final 1x1 projection of each block, because applying a non-linear activation in the low-dimensional bottleneck space can destroy information that the block has captured.
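To connect this to the project, the second-stage eye-state classifier can be built on top of an ImageNet-pretrained MobileNet using Keras. The sketch below is illustrative rather than the project's exact code: the 224x224 input size, the two-class (open/closed) output, the "eye_dataset" directory layout and the training settings are assumptions that would need to match the actual data.

```python
# Minimal transfer-learning sketch. Assumptions: RGB eye crops stored in class
# sub-folders such as eye_dataset/open/*.jpg and eye_dataset/closed/*.jpg.
from tensorflow import keras

IMG_SIZE = (224, 224)

# Base model: MobileNetV2 pretrained on ImageNet, used as a frozen feature extractor.
base = keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = keras.applications.mobilenet_v2.preprocess_input(inputs)  # scale pixels to [-1, 1]
x = base(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)      # binary eye-state output
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical dataset directory; which class maps to label 1 follows folder order.
train_ds = keras.utils.image_dataset_from_directory(
    "eye_dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = keras.utils.image_dataset_from_directory(
    "eye_dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")

model.fit(train_ds, validation_data=val_ds, epochs=5)
```

Freezing the pretrained base and training only the small classification head keeps training fast on limited hardware such as Colab; if more accuracy is needed, the top layers of the base can later be unfrozen and fine-tuned with a lower learning rate.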