Automated Cheque Processing Through Data Verification and Siamese Networks

Anil Muthigi1, Ashutosh Kumar1, Gaurav Bhagchandani1, Vijay Nath2
[email protected], [email protected], gauravbhagchandani51@gmail.com, [email protected]
1. Department of CSE, B.I.T. Mesra, Ranchi-835215 (JH), India
2. Department of ECE, B.I.T. Mesra, Ranchi-835215 (JH), India

Abstract- Bank cheques are still widely used for financial transactions all over the world. A large number of cheques are processed manually on a daily basis, requiring a lot of time, money and human effort. In such manual verification, information like the date, signature and amount present on the cheque has to be physically verified. This paper aims at finding a solution for processing cheques which increases the efficiency of this process while minimizing human intervention. The service was hosted locally on a webpage: we first accepted the cheque image from the user and passed it to OpenCV, which returned the various parts of the cheque; these parts were then passed to the Google Vision API to be converted into text, while the MICR code was passed to Tesseract OCR. After successful extraction, the details were verified against the information in an SQLite database, and the signature was verified using a model trained in a Jupyter Notebook.

1. Introduction

Due to security and trust issues, paper cheques are still estimated to play a big role in financial transactions worldwide.
For clearance, the cheque is first converted into its digital form and then passed on to the cheque clearing unit for further processing, which involves visual verification of all the details and digital transfer of cheque details between banks for amount confirmation and for validating the transfer.

2. Extracting cheque details

Different bank cheques have different sizes, shapes and different relative positions of the fields on the cheque. A searching region bounded by coordinates can be determined for every field on the cheque, which can then be extracted from its region after being grayscaled.

2.1 Noise reduction

Different cheques have different background styles, which introduces certain unwanted errors ("noise") into the extraction of text from the different fields of the cheque. To avoid extracting unnecessary details, we can convert the extracted text into greyscale format and use other techniques like erosion and dilation to enhance the quality of the extracted information.

2.2 Extracting fields from their regions

As demonstrated in Fig-1, the cheque image can be broken down into different bounding boxes based on the relative positions of the fields on the cheque, in order to extract the various fields from the cheque image. This process is called image slicing.

2.3 Extracting the MICR code

For Indian banks, there is a common pattern to the location of the MICR (Magnetic Ink Character Recognition) code on a bank cheque: the MICR code is always present in the lower 10% of the cheque image. The MICR code is written using a special font which can only be extracted by an optical character recognition model specifically trained to read that font; one example of such a model is Tesseract OCR. The extracted MICR code is divided into 4 parts. The first part of the code is the cheque number. The second part is itself divided into three parts: the first denotes the city, the second denotes the bank code to distinguish between different banks in India, and the third is the branch code which distinguishes different branches of the same bank. The third part of the code is the RBI (Reserve Bank of India) code, and the fourth part denotes the transaction number. Thus, extracting the MICR code helps us uniquely identify the payer's bank details.

Fig-1: Extracting cheque details from an Indian Bank cheque
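The image slicing of 2.2 and the MICR extraction of 2.3 can be sketched in plain NumPy array slicing (the paper itself uses OpenCV and Tesseract for these steps). The box fractions, the sample MICR string and its delimiting by spaces below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def crop_region(img, top, bottom, left, right):
    """Slice a field out of the cheque image by fractional coordinates."""
    h, w = img.shape[:2]
    return img[int(top * h):int(bottom * h), int(left * w):int(right * w)]

def crop_micr_band(img):
    """The MICR line sits in the lower 10% of an Indian cheque image."""
    return crop_region(img, 0.90, 1.00, 0.0, 1.0)

def parse_micr(micr):
    """Split an OCR-extracted MICR string into the 4 parts described in 2.3:
    cheque number, city-bank-branch code, RBI code, transaction number."""
    cheque_no, sort_code, rbi_code, txn_no = micr.split()
    return {
        "cheque_number": cheque_no,
        "city": sort_code[0:3],    # first 3 digits: city
        "bank": sort_code[3:6],    # next 3 digits: bank
        "branch": sort_code[6:9],  # last 3 digits: branch
        "rbi_code": rbi_code,
        "transaction_number": txn_no,
    }

cheque = np.zeros((400, 1000), dtype=np.uint8)  # stand-in grayscale cheque
band = crop_micr_band(cheque)
print(band.shape)      # (40, 1000) -- the lower 10% of the rows
fields = parse_micr("123456 110002004 123456 29")  # made-up MICR text
print(fields["bank"])  # 002
```

In practice the cropped band would be handed to Tesseract rather than parsed from a hard-coded string.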
3. Data Verification

After extracting the data, we first need to verify the extracted cheque details for validity. The payee-name field on a cheque denotes the name of the person to whom the money is to be paid through that cheque.

3.1 Verifying cheque details

In India, the payee name is located above the amount field, as shown in Fig-1. For a cheque to be processed without any errors, the payee's name should match the given account number, the account number on the cheque should match that of the payer, the signature of the payer should match the one in the database, and the payer should have enough balance to complete the transaction. In India, cheques cannot be processed more than 3 months after the date written on the cheque, hence verifying the date on which the cheque was issued is also important.
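The checks listed in 3.1 can be sketched as a small routine. The record layout, field names and sample values are illustrative assumptions (not the paper's actual SQLite schema), and the three-month staleness rule is approximated at month granularity:

```python
from datetime import date

# Hypothetical payer record as it might be stored in the database.
payer = {"account": "123456789", "balance": 50_000.0}

def cheque_is_valid(cheque, payer, today):
    """Apply the checks from 3.1: stale-date rule (3 months in India),
    account match, and sufficient balance."""
    issued = cheque["date"]
    months_old = (today.year - issued.year) * 12 + (today.month - issued.month)
    if months_old > 3:
        return False, "stale cheque"
    if cheque["account"] != payer["account"]:
        return False, "account mismatch"
    if cheque["amount"] > payer["balance"]:
        return False, "insufficient balance"
    return True, "ok"

ok, reason = cheque_is_valid(
    {"date": date(2021, 1, 5), "account": "123456789", "amount": 12_000.0},
    payer, today=date(2021, 3, 20))
print(ok, reason)  # True ok
```

A production system would also compare the payee name and verify the signature (Section 3.2) before clearing the amount.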
3.2 Signature Verification

This is not only the most important step in the cheque processing industry but also the most sensitive of all the cheque details. What makes it so sensitive is that there is no exact measure to quantify whether a signature is genuine or not. If the given signature seems to be similar to the one in the database, it is considered genuine. Hence, there is inevitably a certain human bias involved in what is considered genuine and what is not, making this a very non-uniform process.

3.2.1 Siamese Networks

As shown in Fig-2, a Siamese Network (also called a twin network) is a special type of CNN (Convolutional Neural Network) model in which two or more inputs are encoded into vector embeddings and the distance between them is computed. The contrastive loss function takes the outputs of this network and treats them as vectors in a multi-dimensional vector space. The loss is computed so as to minimize the distance between similar (positive) samples and maximize the distance between dissimilar (negative) samples.

3.2.2 Contrastive loss

The mathematical expression of the contrastive loss function is as follows:

L(Y, X1, X2) = (1 - Y) * (1/2) * D^2 + Y * (1/2) * [max(0, m - D)]^2

where D is the distance between the embeddings of the two inputs X1 and X2, m is the margin, and Y = 0 for a similar (positive) pair while Y = 1 for a dissimilar (negative) pair.

Contrastive loss is a distance-based loss function (the distance can be either cosine or Euclidean). It tries to ensure that semantically similar examples are embedded close together. The original sample has a margin m around it in the vector space: the loss function is mathematically defined to push negative samples outside of this neighborhood by the margin while keeping positive samples within the neighborhood.

Fig-2: Comparing the similarity of two signatures
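The contrastive loss of 3.2.2 can be written out numerically. A minimal NumPy sketch, assuming Euclidean distance, an illustrative margin m = 1, and made-up 2-d embeddings (a trained Siamese network would supply the real embeddings):

```python
import numpy as np

def contrastive_loss(e1, e2, y, margin=1.0):
    """y = 0 for a similar (genuine) pair, y = 1 for a dissimilar (forged) pair.
    Similar pairs are pulled together; dissimilar pairs are pushed apart
    until their distance exceeds the margin m."""
    d = np.linalg.norm(e1 - e2)  # Euclidean distance between embeddings
    return (1 - y) * 0.5 * d**2 + y * 0.5 * max(0.0, margin - d)**2

a = np.array([0.1, 0.9])
b = np.array([0.1, 0.9])  # identical embedding: a genuine match
c = np.array([0.9, 0.1])  # distant embedding: a likely forgery

print(contrastive_loss(a, b, y=0))  # 0.0 -- genuine pair at zero distance
print(contrastive_loss(a, c, y=0))  # ~0.64 -- genuine pair penalised for being far apart
print(contrastive_loss(a, c, y=1))  # 0.0 -- forged pair already beyond the margin
```

At verification time, the model simply thresholds the distance d between the cheque signature and the reference signature rather than evaluating the loss.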
4. Comparison with current state-of-the-art models on the CEDAR Dataset

State of the art model                         #Signatures   Accuracy   FAR     FRR
Word Shape                                     55            78.50      19.50   22.45
Graph Matching                                 55            92.10      8.20    7.70
Zernike Moments                                55            83.60      16.30   16.60
Surroundedness features                        55            91.67      8.33    8.33
Signet Model (Convolutional Siamese Networks)  55            86.5       13.70   13.15

(FAR: False Acceptance Rate; FRR: False Rejection Rate.)

5. Conclusion

When we are dealing with issues as sensitive as money, experimentation and risk are not much appreciated. An automated cheque processing system can therefore be introduced into the process only if it offers reliability. With advanced technologies and further research, the accuracy of the model can be further enhanced, and the cheque clearance process can be truly automated and put to industrial use. There is no denying that human-level accuracy and understanding are yet to be achieved, but this is definitely a step in the right direction.