INTERVIEW

Good afternoon, Mr. Tran Ngoc Dam. I assume you have accepted my portfolio, which is why I am here, and I really appreciate it. Let me briefly introduce myself again so we can get started. I have just graduated from the University of Technology & Education with a major in mechatronics; as you can see, I was awarded a degree with distinction. Although I do not have much experience, since I am a fresh engineer, I did participate in a few academic competitions at my university, such as ... As far as I know, your company is seeking interns with the specific knowledge listed in the job description. I may not be an expert in all of these fields, but I believe the working environment here will help. So far, I have studied sensors and actuators, as well as control systems, in some depth. My only drawback is that I struggle at first in an unfamiliar environment, but that will not matter once I get used to how things are operated here. So I guess that is all about me; I hope you will consider my application, as I believe I am well suited for this job. Have a nice day! (The above script might be different from what I was saying in the video)

TITLE & ABSTRACT

1. DESIGNING A CAMERA SUPPORT DEVICE FOR USE IN FILM STUDIOS (Latest mechanical project)

Abstract: The demand from film crews, freelancers and others for smoother video footage has been rising and shows no signs of slowing down, which is why this device was designed and built to prevent shaky footage and similar filming issues. Beyond the basic principle shared by most camera sliders on the market, we concentrated on the control program with an advanced algorithm. Hence, the device delivers more accurate stabilization, so that high quality is always maintained.

2.
RESEARCHING & IMITATING A CARTESIAN FUSED DEPOSITION MODELING 3-DIMENSIONAL PRINTER (DIY project)

Abstract: Although additive manufacturing is no longer a new technology, for study purposes I researched how this printing method works and built a practical printer. The design was based on the simplicity, extensibility and compactness of a popular coordinate system, the Cartesian layout. The step-by-step build made the connection between the mechanical components and the electrical system explicit. Through that, I thoroughly understood the operation and optimized the parameters in the firmware, the slicer software and the calibration in order to obtain the best possible printed model.

3. THE IMPACT OF MUSIC ON HUMAN DEVELOPMENT AND WELL-BEING (Online reference)

Abstract: Music is claimed to be one of the most universal ways of expression and communication, for all ages and all cultures in this world. Research by anthropologists suggests that music has been a characteristic of the human condition for millennia. Listening, singing, composing or improvising, whether individually or collectively, are common activities for the majority of people. Music seems an enjoyable pastime when talked about, but the way it influences us goes beyond basic amusement. I did not choose a project for this third piece because I supposed a social theme would be more enjoyable to discuss, and because I had already written about two of my projects.

The three paragraphs below discuss the PRACTICAL RESULTS of the titles and abstracts above (except the third).

PARAGRAPH 1

Through plenty of experiments, shooting footage on various terrains, the result was 80% as expected. The NEMA 17 stepper motor was changed from a 42x34 mm frame to a 42x38 mm one due to a lack of torque: the initial maximum payload was 2 kilograms, but the OpenBuilds pad needs to move at high acceleration, so the change was necessary.
The physical endstop was also replaced with an optical one to minimize mechanical impact, and the number of endstops was reduced to one once the code had been fixed: users can no longer move the camera beyond the maximum length of the slider, thanks to the prevention built into the code logic. Eventually, we decided to use a square linear guide rail instead of V-slot wheels, since the high accuracy of the rail is the most decisive factor in footage quality. Ultimately, the control program, with its fully developed advanced algorithm, is the key factor that determines the success of the product.

PARAGRAPH 2

To verify the quality of a self-built printer, this FDM module was tested by printing an articulated model of a movable octopus without support (you can check it out at a certain timestamp on my YouTube channel, Clever Dung). Adhesion aids were also unnecessary thanks to the PEI magnetic build plate. Traditional PLA was used because it is easy to work with; although its sturdiness is not as guaranteed as that of ABS, the final product was acceptable. On the other hand, increasing the layer height without adjusting the temperature led to gaps between layers. Another phenomenon when printing at high speed was layer shift, which also broke the model's smoothness. In addition, adding an external MOSFET made the current more stable and avoided heat loss from the heated bed, so that it would not affect the operation of the power supply IC.

PARAGRAPH 3

The benefits of music are reported in studies of college students as well as amateur musicians. Music appears to be a common accompaniment to exercise, whether in the park, a café or a restaurant. Even at a very young age, people have begun to explore the potential physical benefits of exercising in synchrony with music, especially in hot and humid conditions. Comparisons between music and sport are often evidenced in performance-related responses of the body and in group behaviors. A more formal link between music and movement is concentration.
In a word, the studies above demonstrate that engaging in musical activity has a positive impact on health and well-being in various ways and in a diverse range of contexts across the lifespan. Musical activities, whether creative or re-creative, carry the potential to be therapeutic, developmental and educational, provided that the musical experiences are perceived as engaging, meaningful and successful by those who participate.

FREESTYLE

Recently, the pandemic seems to have partly affected my personal life, or anyone else's, I think. How come? Online studying is not a big deal; the problem is that I had more time to think about myself, and not in a very positive way. It sounds a bit confusing, but let me get this straight. Taking a few days off from school might let students let their hair down, but after a few months to a year, things just aren't the same. I started to interact less, and prolonged sleep blurred the line between delusion and reality for me. Playing music didn't ease me anymore once certain reasons came up; my electric guitar also had some broken strings. Whenever I tried to brainstorm a math problem, my mind went blank, and the same thing happened while I was trying to communicate or explain an issue. The thing was, day by day I began to notice everyone's vices through their behaviors, from my family members and my friends to myself, as if someone behind me were pointing out even the tiniest ones. But you need not worry about me; as soon as I can meet people again, things will get back to normal. Peace.

© 2021 Albright Abu Edet, Samaila Umaru and Isah Ibrahim. This open access article is distributed under a Creative Commons Attribution (CC-BY) 4.0 license.
Journal of Mechatronics and Robotics
Original Research Paper

On the Design and Construction of a Dual Axis Solar Tracker Prototype for a Dish Concentrator using an ATMega328P Microcontroller

1 Albright Abu Edet, 2 Samaila Umaru and 3 Isah Ibrahim
1 Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Nigeria
2 Air Force Institute of Technology, Kaduna, Nigeria
3 Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Nigeria

Article history: Received: 04-02-2021; Revised: 23-03-2021; Accepted: 28-03-2021

Corresponding Author: Albright Abu Edet, Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Nigeria. Email: albrightedet@gmail.com

Abstract: The desire for a reliable power supply drives further research into alternative sources of power. Although solar tracking is not a new technology, solar harvesters still suffer low efficiency due to the intermittence of solar insolation, so many smart systems have been designed to get the most out of solar harvesters. Among these systems is the dual axis solar tracking system. To demonstrate the tracking approach, a prototype was developed as a model of a conventional tracking system. It met its objective of tracking solar irradiance and re-orienting the payload in real time toward the point of maximum solar insolation. Testing and observation using the developed prototype gave evidence that solar trackers can increase the efficiency of solar harvesters. The results showed steady solar tracking for about 6 h, from 9:00 am to 3:15 pm. The sensitivity of the sensors allows the system to track solar insolation as low as 5 lumen. The entire system was powered by 5 volts, which made it energy efficient and cheap to run.
Keywords: ATMega328P, AVR, Microcontroller, Arduino, Prototype, Solar, Tracking System, Dish Concentrator, C Programming Language, Systems, Mechatronic

Introduction

Solar energy is a clean source of energy, free from environmental pollution, and one of the alternative energy sources with vast potential. Harnessing this energy follows different methods depending on the need, whether for electricity or for heating using solar collectors. Many projects have been designed and constructed to utilize solar energy, and all have pointed to the importance of solar trackers in increasing the efficiency of the system. A solar tracker is a mechatronic system designed to follow the movement of the sun. There are basically two types of solar trackers: the single axis solar tracker and the dual axis solar tracker (Hafez et al., 2018). The single axis solar tracker is fixed at an angle equal to the latitude of the location where it is installed and is capable of following the sun on one axis only, from East to West throughout the day, by way of a rotating mechanism. The dual axis solar tracker is a setup quite similar to the single-axis tracker; the only difference is that an extra degree of freedom is added, so that the tracker can rotate on two axes, in both the "East to West" and "North to South" orientations (Lee and Rahim, 2013).

Description of the Developed Solar Tracker

The prototype was cut out from Perspex, as shown in Fig. 1; Perspex was used for its ease of fabrication. The lower arm, the primary axis module, moves from North to South (0-180°) with the help of a servo motor, while the upper arm, the secondary module, moves from East to West (0-180°) by another servo motor. The combination of both movements results in dual axis motion. The payload, the dish concentrator, was cut out of a plastic plate and covered with a reflector.
A sensor pyramid like that of (Mareeswari et al., 2019), as shown in Fig. 2, was also designed using four light dependent resistors, separated by a cross bar designed to cast a shadow on the sensors and thus vary the luminous intensity of the sun reaching each sensor.

Albright Abu Edet et al. / Journal of Mechatronics and Robotics 2020, Volume 5: 18-22, DOI: 10.3844/jmrsp.2021.18.22

Fig. 1: A picture of the prototype (labelled parts: receiver, dish concentrator, servo motor, support arms, compartment for the electric circuit, base)

Fig. 2: Sensor pyramid showing four sensors, each representing one direction (North, South, East and West)

Working Principle of the Tracker

The solar tracker was designed so that at sunrise the sensors (LDRs) receive the light of the rising sun and send electronic pulses to the microcontroller, which in turn drives the servo motors to face the sun. From then on, the tracking system follows the sun as it crosses from east to west, spanning about 0-180° from the eastern horizon to the western horizon. As the tracker moves with the sun, it aligns the concentrator to face the sun's rays directly, thus receiving a large amount of sunlight for a longer period of time. With the load in position, achieving a correct position of the concentrator is very important to the entire design. At night, when there is no sun, the tracker is programmed to return to the 0° (facing East) position, its position at sunrise, in order to give the sensors a horizontal view of the rising sun.

Materials

The materials used for the project were:

1. 10,000 Ω resistors
2. Vero board/breadboard
3. Electronic sensors (LDR: light dependent resistors)
4. Microcontroller: ATmega328P
5. 9 g servo motors
6. Perspex
7.
Jumper wires

Methodology

The following methods were adopted in the development of the prototype (Fig. 3):

1. Computer aided design of the prototype in the SolidWorks integrated development environment
2. Design of the electronic circuit
3. Selection of materials for the prototype hardware
4. Fabrication of parts
5. Development of the electronic circuit
6. Development of the flow chart and code using the C programming language
7. Interfacing of hardware and firmware components
8. Testing

Testing Procedure

The test was conducted for 15 min in each of five windows: (6:00-6:15 am), (9:00-9:15 am), (12:00-12:15 pm), (3:00-3:15 pm) and (6:00-6:15 pm). Every hour, the sun moves about 15° across the sky (Thomas and Stephen, 2014); since 15° corresponds to 1 h (60 min), each 15 min test window corresponds to 3.75° of solar movement. The tracker was programmed to track the sun automatically once the luminous intensity was greater than five lumen, with the output displayed on the serial monitor at 3 min intervals. The tracker was placed facing south with an azimuth of 11°.

Flow Chart

The tracker flow chart in Fig. 4 shows the steps taken by the system to execute its function. Each step is executed in sequence, starting from the top: solar irradiance received by the light dependent resistors is converted to an electric signal and fed to the microcontroller, which performs logical comparisons on the signal and transmits commands to the actuators if the difference is logically true; if the difference is logically false, the process returns to the top and repeats.
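The comparison loop described by the flow chart can be sketched as follows. This is an illustrative Python sketch, not the authors' actual C firmware; the dead band (TOLERANCE) and the 1° step size are assumed values, and only the 5 lumen activation threshold and the 0-180° servo range come from the paper.

```python
THRESHOLD = 5    # minimum luminous intensity (lumen) before tracking starts (from the paper)
TOLERANCE = 10   # hypothetical dead band between opposing sensors

def clamp(angle, lo=0, hi=180):
    """Keep a servo angle within the 0-180 degree range of each axis."""
    return max(lo, min(hi, angle))

def track_step(north, south, east, west, ns_angle, ew_angle, step=1):
    """One pass of the loop: compare opposing LDR readings and nudge the servos."""
    if max(north, south, east, west) < THRESHOLD:
        # Below 5 lumen: return to home (0 deg, facing East) and await sunrise.
        return 0, 0
    # North-South axis: move toward whichever sensor reads brighter.
    if north - south > TOLERANCE:
        ns_angle = clamp(ns_angle + step)
    elif south - north > TOLERANCE:
        ns_angle = clamp(ns_angle - step)
    # East-West axis, same comparison.
    if east - west > TOLERANCE:
        ew_angle = clamp(ew_angle + step)
    elif west - east > TOLERANCE:
        ew_angle = clamp(ew_angle - step)
    return ns_angle, ew_angle
```

When all four readings are balanced to within the dead band, the concentrator is facing the sun and the servos hold position, which matches the "logically false" branch of the flow chart.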
Fig. 3: The developed prototype

Fig. 4: Flow chart (Start → all four sensors receive light → signal is stored for comparison and processing → signal is processed → decision: if the comparison is true (Y), a command is sent to the actuators; if false (N), return to the start → End)

Discussion

From the results in Table 1, the LDR (North) received the highest solar irradiance because that sensor faced the rising sun at a wider angle than the other three sensors. From 9:00 am to 3:12 pm, the table shows steady tracking of the sun at almost 1000 lumen. It also shows how much energy can be harnessed by keeping the payload facing the sun directly as tracking continues. The high sensitivity of the sensors allows the system to start tracking early, at luminous intensities as low as 5 lumen, and to continue beyond 6:00 pm until the irradiance falls below 5 lumen, as in Fig. 5. When the irradiance is less than 5 lumen, the system returns to the home position to await the rising sun.
Table 1: Solar irradiance over time (6:00 am - 6:15 pm) as captured by each sensor

Time      LDR (North)  LDR (South)  LDR (East)  LDR (West)
6:00 AM        8            9           10           6
6:03 AM       14           22           23          11
6:06 AM       37           40           43          27
6:09 AM       72           69           67          55
6:12 AM      123          116          103         100
6:15 AM      191          192          163         163
9:00 AM      933          964          979         925
9:03 AM      937          977          981         930
9:06 AM      950          990          983         939
9:09 AM      967          992          984         948
9:12 AM      979          989          984         957
9:15 AM      982          992          985         972
12:00 PM     965          972          945         947
12:03 PM     987          987          970         971
12:06 PM     988          987          970         970
12:09 PM     986          986          968         969
12:12 PM     988          986          971         971
12:15 PM     987          986          970         970
3:00 PM      982          982          968         969
3:03 PM      983          979          971         963
3:06 PM      982          982          970         969
3:09 PM      981          981          967         968
3:12 PM      960          964          940         940
3:15 PM      758          733          762         628
6:00 PM      765          785          716         720
6:03 PM      741          756          688         687
6:06 PM      720          738          664         667
6:09 PM      693          710          633         636
6:12 PM      662          671          599         594
6:15 PM      103          219           60         198

Fig. 5: Solar irradiance (lumen) against time (6:00 am - 6:15 pm), plotting the four LDR readings from Table 1

Conclusion

Given the simplicity of its design, the solar tracker prototype can be easily adopted and deployed anywhere. It met its objective of tracking solar irradiance and re-orienting the payload in real time toward the point of maximum solar insolation. Testing and observation using the developed prototype, shown in Fig. 3, gave evidence that solar trackers can increase the efficiency of solar harvesters. The entire system was powered by 5 volts, which made it energy efficient and cheap to run.

Author Contributions

All authors contributed equally to this work.
Ethics

This article is original and contains unpublished material. The corresponding author confirms that all other authors have read and approved the manuscript and that no ethical issues are involved.

References

Hafez, A. Z., Yousef, A. M., & Harag, N. M. (2018). Solar tracking systems: Technologies and trackers drive types - A review. Renewable and Sustainable Energy Reviews, 91, 754-782. https://doi.org/10.1016/j.rser.2018.03.094

Lee, J. F., & Rahim, N. A. (2013, November). Performance comparison of dual-axis solar tracker vs static solar system in Malaysia. In 2013 IEEE Conference on Clean Energy and Technology (CEAT) (pp. 102-107). IEEE. https://doi.org/10.1109/CEAT.2013.6775608

Mareeswari, R., Tharani, S., Niveditha, G., & Nithya, T. (2019). Solar tracking system a desideratum using LDR and microcontroller package. International Journal of Advanced Research (IJAR), 7(11), 254-259. https://doi.org/10.21474/IJAR01/10000

Thomas, T. A., & Stephen, E. S. (2014). Explorations: An introduction to Astronomy. https://www4.uwsp.edu/physastr/kmenning/Astr311/Lect02.pdf

Abbreviations and Units

Abbreviations: AVR; LDR (light dependent resistor). Units: ° (degree); Ω (ohm).

A SURVEY ON DIGITAL CAMERA IMAGE FORENSIC METHODS

Tran Van Lanh (a), Kai-Sen Chong (b), Sabu Emmanuel (b), Mohan S Kankanhalli (c)
(a) Department of Computer Science, Uppsala University, Sweden
(b) School of Computer Engineering, Nanyang Technological University, Singapore
(c) School of Computing, National University of Singapore, Singapore
latr0465@student.uu.se, {Y030028, asemmanuel}@ntu.edu.sg, mohan@comp.nus.edu.sg

ABSTRACT

There are two main interests in image forensics, namely source identification and forgery detection. In this paper, we first briefly provide an introduction to the major processing stages inside a digital camera and then review several methods for source digital camera identification and forgery detection.
Existing methods for source identification explore the various processing stages inside a digital camera to derive clues for distinguishing the source cameras, while forgery detection checks for inconsistencies in image quality or for the presence of certain characteristics as evidence of tampering.

1. INTRODUCTION

Multimedia forensics has become important in the last few years. There are two main interests, namely source identification and forgery detection. Source identification focuses on identifying the source digital devices (cameras, mobile phones, camcorders, etc.) using the media produced by them, while forgery detection attempts to discover evidence of tampering by assessing the authenticity of the digital media (audio clips, video clips, images, etc.). In this paper, we review several techniques in digital camera image forensics, i.e. in source camera identification and in forgery detection. Source camera identification explores the different processing stages of the digital camera for unique characteristics, exploiting the presence of lens radial distortion [1], sensor imperfections [2], [3], color filter array (CFA) interpolation [4], [5], [6], and inherent image features [7]. Image forgery includes splicing images to construct a new concocted image, applying region duplication/swapping to hide or relocate certain objects in the image, and applying image editing to remove objects from or add new objects into the image. For forgery detection, some methods inspect the image for inconsistencies in chromatic aberration [11], lighting [12], and the camera response function (CRF) [13] as signs of forgery. Others try to detect certain modes of manipulation using JPEG quantization tables [10], bicoherence [14], and robust matching [15]. In section 2, we give an overview of the structure and processing stages of a typical digital camera.
In sections 3 and 4, several methods for source digital camera identification and forgery detection are presented respectively, and section 5 concludes the paper.

Figure 1. Elements of a typical digital camera

2. INSIDE A DIGITAL CAMERA

The general structure of a digital camera is shown in Figure 1. Digital cameras consist of a lens system, filters, a color filter array (CFA), an image sensor, and a digital image processor (DIP) [9]. Color images may suffer from aberrations caused by the lenses, such as chromatic aberration and spherical aberration. Chromatic aberration is the failure to converge different wavelengths at the same position on the sensor, while spherical aberration causes light passing through the periphery of a spherical lens to converge at a point closer to the lens than light passing through the lens center. In lens systems, these effects can be minimized using special combinations of convex and concave lenses, as well as aspheric lenses. The lens system also includes the auto-exposure control, auto-focus control and the image stabilization unit. Auto-exposure changes the aperture and the shutter speed, along with a carefully calibrated automatic gain controller, to capture well-exposed images. Auto-focus runs a miniature motor that focuses the lenses by moving them in and out until the sharpest possible image of the subject is obtained. Image stabilization helps to give sharper pictures by counteracting camera shake. After passing through the lenses, light goes through a set of filters. An infrared filter is an absorptive or reflective filter allowing only the visible part of the spectrum to pass, while blocking infrared radiation that can decrease the sharpness of the formed image. An anti-aliasing filter reduces aliasing, a phenomenon that occurs when the spacing between pixels in the sensor cannot support the finer spatial frequencies of the target objects, such as decorative patterns.
At the heart of a digital camera is the image sensor. An image sensor is an array of rows and columns of photodiode elements, or pixels. When light strikes the pixel array, each pixel generates an analog signal proportional to the intensity of the light, which is then converted to a digital signal and processed by the DIP. Most digital cameras use a charge-coupled device (CCD) as the image sensor, although CMOS chips are a popular alternative. Sensor pixels are not sensitive to colors; they just record the brightness of light, producing a monochromatic output. To produce a color image, a color filter array (CFA) is placed in front of the sensor so that each pixel records the light intensity of a single color only. Most digital cameras use the Green-Red-Green-Blue (GRGB) Bayer pattern CFA. The output from the sensor with a Bayer filter is a mosaic of red, green and blue pixels of different intensities. Since each pixel records only one of the three colors, the full color image is produced by the DIP using various interpolation (demosaicking) algorithms. Other alternative CFA filters include the Cyan-Yellow-Green-Magenta (CYGM) pattern, the Red-Green-Blue-Emerald (RGBE) pattern, and the Cyan-Magenta-Yellow (CMY) pattern. Besides interpolation, the DIP also performs further processing such as white balancing, noise reduction, matrix manipulation, image sharpening, aperture correction, and gamma correction to produce a good quality image.

3. SOURCE DIGITAL CAMERA IDENTIFICATION

3.1. Using Lens Aberration

Choi et al. [1] propose lens radial distortion as a fingerprint to identify the source camera. Radial distortion causes straight lines to appear as curved lines in the output images; it occurs when the transverse magnification M_T (the ratio of the image distance to the object distance) is not a constant but a function of the off-axis image distance r.
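Radial distortion is commonly described by a polynomial in the radial distance. The sketch below assumes the usual two-coefficient form r_d = r_u(1 + k1·r_u² + k2·r_u⁴), which may differ in detail from the exact model fitted in [1]; it only illustrates how a fitted (k1, k2) pair could serve as a per-model fingerprint.

```python
def radial_distort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Map an undistorted point (x, y) to its distorted position using a
    two-coefficient polynomial radial distortion model about centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy                 # squared distance from the distortion centre
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial scaling factor
    return cx + dx * scale, cy + dy * scale
```

With k1 = k2 = 0 the mapping reduces to the identity, and the displacement grows with distance from the centre, which is why the assumption that the centre of distortion coincides with the centre of the image (discussed below) matters for estimating the coefficients.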
The authors argue that different manufacturers employ different lens system designs to compensate for radial distortion and that the lens focal length affects the degree of radial distortion. Thus, each camera model will express a unique radial distortion pattern that helps to identify it. Two experiments were performed on 3 different camera models, obtaining average classification accuracies of 91.53% and 91.39% respectively. Although this method is not tested on two cameras of the same model, based on the authors' arguments about radial distortion differences we can expect a low accuracy in that case. Additionally, this method will fail to measure radial distortion if there is no straight line in the image, since the distortion is measured using the straight line method. Lastly, the authors assume that the centre of distortion is the centre of the image, which may not be the case. If this is taken into account, a higher accuracy may be possible.

3.2. Using Sensor Imperfections

Pixel Defects: Geradts et al. [2] examine the defects of CCD pixels and use them to match target images to the source digital camera. Pixel defects include point defects, hot point defects, dead pixels, pixel traps, and cluster defects. To find the defective pixels, a couple of images with a black background are taken by each of the 12 cameras tested and compared to count the common defect points, which appear as white. The results show that each camera has a distinct pattern of defective pixels. However, it is also shown that the number of visible defective pixels for the same camera differs between images and depends very much on the content of the image, and that the number of defective pixels visible in images of the same content taken by the same camera at different temperatures also differs. Furthermore, for cameras with a high-end CCD, the authors could not find any visible defective pixel, which means that not all cameras necessarily have pixel defects.
In addition, most cameras have built-in mechanisms to compensate for defective pixels. Therefore, the method cannot be applied confidently to all digital cameras.

Sensor Pattern Noise: A reliable method for identifying the source camera based on sensor pattern noise is proposed by Lukas et al. in [3]. Pixel non-uniformity (PNU), where different pixels have different light sensitivities due to imperfections in the sensor manufacturing process, is a major source of pattern noise. This makes PNU a natural feature for uniquely identifying sensors. The authors study 9 camera models, of which 2 have similar CCDs and 2 are exactly the same model. The camera identification is 100% accurate, even for cameras of the same model, and the results are also good for compressed images. One problem with the conducted experiments is that the authors use the same image set to calculate both the camera reference pattern and the correlations for the images. We have run several experiments with this model on cropped images; it turns out that the model fails to predict the source camera of cropped images. In addition, for the model to work, the size of the images used for computing the camera reference pattern must be the same as the size of the test image.

3.3. Using CFA Interpolation

Traces of Color Interpolation in Color Bands: Bayram et al. [4] explore the CFA interpolation process to determine the correlation structure present in each color band, which can be used for image classification. The main assumption is that the interpolation algorithm and the design of the CFA filter pattern of each manufacturer (or even each camera model) are somewhat different from others, which will result in distinguishable correlation structures in the captured images.
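This assumption can be illustrated with a toy one-dimensional example (purely illustrative, not the authors' method): when missing samples are filled in by averaging their neighbours, as a bilinear demosaicker would do, a least-squares fit over the row recovers the interpolation weights exactly, exposing the correlation structure that interpolation leaves behind.

```python
import numpy as np

rng = np.random.default_rng(1)
sensor = rng.uniform(0.0, 255.0, 101)   # true scene values along one row
row = sensor.copy()
# "Demosaick" the odd pixels by averaging their even-indexed neighbours.
row[1::2] = 0.5 * (sensor[0:-1:2] + sensor[2::2])

# Fit each interpolated pixel as a weighted sum of its two neighbours;
# the recovered weights reveal the interpolation kernel that was used.
A = np.stack([row[0:-1:2], row[2::2]], axis=1)   # left and right neighbours
b = row[1::2]                                    # interpolated pixels
weights, *_ = np.linalg.lstsq(A, b, rcond=None)
# weights comes out as approximately [0.5, 0.5], the averaging kernel
```

Real demosaicking works in two dimensions with content-adaptive kernels, which is why Bayram et al. resort to the iterative EM algorithm described next rather than a single global least-squares fit.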
Using the iterative Expectation Maximization (EM) algorithm, 2 sets of features are obtained for classification: the interpolation coefficients from the images, and the peak locations and magnitudes in the frequency spectrum of the probability maps. When using a 5x5 interpolation kernel, the classification accuracy is 95.71% for two different cameras, but it drops to 83.33% when three cameras are compared. A larger set of cameras should have been used to determine its effect on the classification accuracy. No experiment is run on cameras of the same model, but we expect the method to fail in that case because cameras of the same model normally share the same CFA filter pattern and interpolation algorithm. In addition, the authors have pointed out that this method does not work well for compressed images.

Quadratic Pixel Correlation Model: Long and Huang [5] obtain a coefficient matrix from a quadratic pixel correlation model in which spatially periodic inter-pixel correlation follows a quadratic form. Four cameras, together with cartoon pictures, are used for the experiments, which obtain 95% accuracy for one camera, 98% for another and 100% each for the remaining two. The authors also test with modified images (compression, added Gaussian noise, gamma correction, smoothing). When compressed at quality 80, the accuracy drops to as low as 80%; the accuracy for images with other modifications is even lower. Since cameras of the same or a similar model would use the same demosaicking algorithm, we expect that the model will not correctly differentiate cameras of the same model. Furthermore, as shown by the experiments, the model performs poorly for modified images. Other than that, the model gives very good performance.

Binary Similarity Measures: Celiktutan et al. [6] use a set of binary similarity measures for identifying the source cell-phone.
The underlying assumption is that the proprietary CFA interpolation algorithm leaves correlations across adjacent bit-planes of an image that can be represented by these measures. Binary similarity measures are metrics used to measure the similarity between binary images, i.e. between the bit-planes of an image. 108 binary similarity measures are obtained and, as in [7], a set of 10 Image Quality Metrics is used as additional features for classification. The highest average accuracy for classifying 3 groups of cameras is 98.7%, while the lowest average accuracy is 81.3%. When classifying 9 cameras, only 62.3% of the classifications are correct. The results show that this method depends on the target cameras and on the number of cameras used. Only the red channel is considered in this paper; including the correlations within the blue and green channels and across channels may give a better result.

3.4. Using Image Features

Kharrazi et al. [7] identify a set of image features that can be used to uniquely classify a camera model. The 34 proposed features are categorized into 3 groups: color features, Image Quality Metrics, and wavelet domain statistics. Features are extracted from images taken by two cameras and then used to train and test the classifier. The accuracy is as high as 98.73% for uncompressed images and 93.42% for JPEG images compressed with a quality factor of 75; it drops to 88% when five cameras are used. Tsai et al. [8] present a similar study of this method using different camera sets. The reported accuracy for cameras with similar or closely related CCDs is low (67.48%). Hence, this method does not work well for cameras with similar CCDs and is unsuitable for identifying source cameras of the same model. Furthermore, it requires all cameras to take images of the same content and resolution, which is not easy in practice.

4. IMAGE FORGERY DETECTION

4.1.
Using JPEG Quantization Tables

Digital cameras generally use JPEG compression to encode images, and different manufacturers typically configure their devices with different compression levels and parameters. Farid [10] exploits this difference by extracting the JPEG quantization table from an image and comparing it against a database of known digital cameras for source identification. Likewise, it can be compared against a database of photo-editing software for signs of tampering. Of the 204 digital cameras used in the experiments, 62 had unique quantization tables, while the remaining tables fell into equivalence classes ranging from 2 to 28 in size. Using 5 different versions of Adobe Photoshop, an image (presumably uncompressed) was saved at each of the 13 compression levels for each version, and the JPEG quantization tables used were found to differ from those of the 204 cameras. Thus, by detecting a JPEG quantization table unique to a particular photo-editing software, it can be determined whether the image is authentic or was previously tampered with and resaved using that software. Often, the image output from the camera is already JPEG-compressed; if it is then edited and resaved with editing software, a double JPEG compression problem arises, which Popescu et al examine in another paper [21].

4.2. Using Chromatic Aberration

Johnson et al [11] check for inconsistency of lateral chromatic aberration across an image as a sign of tampering. The authors model lateral chromatic aberration as the expansion or contraction of one color channel with respect to another, which results in a misalignment of the color channels. Model parameters are sought to bring the color channels back into alignment, and a metric based on mutual information is used to quantify the alignment.
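The mutual-information alignment metric can be sketched as follows (a minimal illustration in NumPy, not the authors' implementation): the misalignment between two channels is estimated by searching for the shift that maximizes their mutual information, computed from a joint histogram.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Estimate mutual information (in bits) between two image channels
    from their joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Alignment search (1-D for brevity): shift the red channel over a small
# window and keep the offset that maximizes mutual information with green.
rng = np.random.default_rng(1)
green = rng.integers(0, 256, size=(32, 32)).astype(float)
red = np.roll(green, shift=1, axis=0)     # simulate a 1-pixel misalignment
best = max((mutual_information(np.roll(red, -s, axis=0), green), s)
           for s in range(-2, 3))
```

The full model in the paper uses an expansion/contraction (radial) warp rather than a pure translation; the translation search here only illustrates the metric.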
The error between the local and global model parameters is quantified by computing the average angular error between the displacement vectors at every pixel. If the average angular error exceeds a certain threshold, the aberration has likely been inconsistent across the image due to forgery. In experiments, the average angular error is 14.8°, with around 98.0% of the errors below 60°. For forensic purposes, the image is tested in blocks; if a block's local estimate differs from the global estimate by more than 60°, it is considered inconsistent with the global estimate and indicates possible tampering. One apparent weakness is that it is difficult to estimate chromatic aberration from a block with little or no spatial frequency content, such as a largely uniform patch of sky. Therefore, if the manipulated regions of the image consist of content with little spatial frequency (e.g. concealment of features in the sky), they are unlikely to be detected by the algorithm.

4.3. Using Lighting

Johnson et al [12] propose a technique for detecting inconsistencies in the direction of the illuminating light source for each object or person in an image using a 2-D model, building upon the work of Nillius et al [16]. Three different situations – infinite, local and multiple light sources – are tested to determine the error in the estimated light source direction relative to the actual direction. The errors are typically below 2°, except in the infinite light source case, where the estimated light direction of an object with non-constant reflectance yielded an error of 10.9°. When tested on sample images, the algorithm successfully detects contradicting light source directions. While this technique should work well for outdoor scenes, where the Sun is often the only light source, indoor scenes with multiple light sources would complicate the analysis due to multiple occluding boundaries.

4.4.
Using Camera Response Function (CRF)

Hsu et al [13], [22] propose a method of detecting image splicing using geometry invariants and the camera response function (CRF). The idea is similar to the work of Lin et al [17], which detects splicing by looking for abnormalities in the CRFs. The suspected splicing boundary is first identified manually. The geometry invariants of the pixels within each region on either side of this boundary are computed and used to estimate the CRF. The CRFs from the two regions are then checked for consistency with each other using cross-fitting techniques: if the data from one region fits well to the CRF from the other region, the image is likely authentic, and otherwise spliced. Finally, the cross-fitting errors from each region are represented as a 6-dimensional vector and fed into an RBF SVM classifier to label the image as authentic or spliced. Only images in RAW or BMP format are tested, and each spliced image is created in Adobe Photoshop from authentic images taken by 2 cameras, with no post-processing, to focus on the effects of splicing. The classification accuracy over 6 runs is 87.55%, with a spliced-image detection rate as high as 90.74%. However, the false acceptance rate (FAR) is also at least 15.58%. Even though the accuracy is reasonably high, only uncompressed images have been tested; whether this technique would work well for JPEG-compressed images remains unknown. Furthermore, spliced images created from original images taken by the same camera, or even the same model, are unlikely to be detected as forgeries.

4.5. Using Bicoherence and Higher Order Statistics

Based on Farid's [19] earlier success in applying bicoherence features to human-speech splicing detection, Ng et al [14], [18] investigate the prospect of using bicoherence features to detect the presence of abrupt discontinuities in an image, or the absence of the optical low-pass property, as a sign of splicing.
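For reference, bicoherence is the normalized bispectrum, whose magnitude is bounded by 1 by the Cauchy-Schwarz inequality. A minimal 1-D estimate over equal-length signal segments (e.g. image rows) can be sketched as follows; this is an illustrative implementation, not the authors' code, and the classification features are statistics (such as mean magnitude and phase entropy) computed from this quantity:

```python
import numpy as np

def bicoherence_1d(segments):
    """Estimate the 1-D bicoherence magnitude b(w1, w2) by averaging the
    bispectrum over segments and normalizing."""
    X = np.fft.fft(np.asarray(segments, dtype=float), axis=1)
    n = X.shape[1]
    idx = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n  # w1 + w2 (mod n)
    num = np.zeros((n, n), dtype=complex)
    d1 = np.zeros((n, n))
    d2 = np.zeros((n, n))
    for x in X:
        prod = np.outer(x, x)          # X(w1) * X(w2)
        conj = np.conj(x)[idx]         # X*(w1 + w2)
        num += prod * conj
        d1 += np.abs(prod) ** 2
        d2 += np.abs(conj) ** 2
    return np.abs(num) / (np.sqrt(d1 * d2) + 1e-12)

# Toy usage on random rows; splicing detection looks for statistics of this
# map that deviate from those of authentic images.
rng = np.random.default_rng(2)
b = bicoherence_1d(rng.standard_normal((8, 16)))
```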
Besides using the original features that describe the mean of the magnitude and the phase entropy, the authors propose two new methods to improve performance: (1) estimating the bicoherence features of the authentic counterpart, and (2) incorporating image features that capture the characteristics of different object interfaces. Using SVM classification, the mean accuracy obtained is 71.48%. Although the initial results are promising, this accuracy is not very high, and more effective features must be derived to model the sensitivity of bicoherence to splicing.

4.6. Using Robust Matching

Fridrich et al [15] focus on the detection of a particular type of forgery, the copy-move attack, in which part of an image is cloned or duplicated elsewhere in the same image, usually to conceal an important feature. Popescu et al [20] report similar research. For uncompressed images, matching is carried out between blocks of size B x B to detect exact replicas. To extend this idea to images saved in lossy JPEG format, instead of directly matching the pixel representation of each B x B block, the authors use a robust representation consisting of quantized DCT coefficients. Experiments on sample altered images have produced good results wit
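The robust-matching idea can be sketched as follows. This is a simplified illustration assuming exact matching of quantized DCT keys with a fixed quantization step; the actual method sorts the quantized blocks lexicographically for efficiency and counts matching shift vectors to suppress false positives:

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square block via the orthonormal DCT matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C @ block @ C.T

def find_duplicate_blocks(img, B=8, q=10):
    """Slide a BxB window over the image, quantize each block's DCT
    coefficients with step q, and report pairs of positions whose
    quantized representations match exactly."""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(h - B + 1):
        for x in range(w - B + 1):
            key = tuple(np.round(dct2(img[y:y+B, x:x+B]) / q)
                        .astype(int).ravel())
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Toy usage: duplicate a patch inside a random image and detect it.
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(24, 24)).astype(float)
img[12:20, 12:20] = img[0:8, 0:8]          # simulated copy-move
dup = find_duplicate_blocks(img)
```

Coarser quantization (larger q) makes the matching more robust to JPEG recompression at the cost of more false matches, which is why the shift-vector count matters in the full method.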