Learning and Individual Differences 100 (2022) 102230

Spatial processing rather than logical reasoning was found to be critical for mathematical problem-solving

Mingxin Yu a,b,c,1, Jiaxin Cui c,d,1, Li Wang a,b,1, Xing Gao c,d, Zhanling Cui c,d, Xinlin Zhou a,b,c,*

a State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
b Advanced Innovation Center for Future Education, Beijing Normal University, China
c Research Association for Brain and Mathematical Learning, China
d College of Education, Hebei Normal University, Shijiazhuang 050024, China

* Corresponding author at: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China. E-mail address: zhou_xinlin@bnu.edu.cn (X. Zhou).
1 Mingxin Yu, Jiaxin Cui and Li Wang are considered co-first authors.

https://doi.org/10.1016/j.lindif.2022.102230
Received 24 March 2021; Received in revised form 22 September 2022; Accepted 9 October 2022

Keywords: Logical reasoning; Spatial processing; Mathematical problem-solving

Abstract

Students' ability to solve mathematical problems is a standard mathematical skill; however, its cognitive correlates are unclear. This study therefore examined whether spatial processing (mental rotation, paper folding, and the Corsi blocks test) and logical reasoning (abstract and concrete syllogisms) were correlated with mathematical problem-solving (word problems and geometric proofs) in college students. The regression results showed that, after controlling for gender, age, general IQ, language processing, cognitive processing (visual perception, attention, and memory skills), and number sense and arithmetic computation skills, spatial processing skills still predicted mathematical problem-solving and geometry skills in Chinese college students. In contrast, the logical reasoning measures based on syllogisms were not predictive after these variables were controlled; notably, they did not even correlate significantly with geometry performance when no control variables were included. Our results suggest that spatial processing is a significant component of mathematical skills involving word and geometry problems, even after controlling for multiple key cognitive factors.

1. Introduction

Over the past few decades, the emphasis on mathematical problem-solving, including the solving of word problems and geometry problems, has intensified in international mathematics education (Niss et al., 2017). Numerous studies have investigated the cognitive mechanisms underlying mathematical problem-solving (e.g., Boonen et al., 2013; Cummins et al., 1988; Hegarty & Kozhevnikov, 1999), and understanding these mechanisms can help improve mathematics education.

Spatial processing and logical reasoning are two crucial general cognitive correlates of mathematical problem-solving (e.g., Chuderski & Jastrzebski, 2018; Duffy et al., 2020; Gomez-Veiga et al., 2018; Hawes & Ansari, 2020; Kleemans et al., 2018; Rothenbusch et al., 2018). However, mathematical problem-solving, logical reasoning, and spatial processing are also supported by other cognitive factors, including attention, processing speed, memory, and language processing (e.g., Bull & Sherif, 2001; Fuchs et al., 2006; Fürst & Hitch, 2000; Gutierrez et al., 2019; Knauff et al., 2003; Mayer et al., 1984; Swanson & Kim, 2007). Therefore, to explore the independent prediction of mathematical problem-solving (especially word problems and geometry) by spatial processing and logical reasoning, it is necessary to control these other cognitive factors that influence mathematical problem-solving, logical reasoning, and spatial processing.
1.1. Spatial processing in mathematical problem-solving

Spatial processing is the ability to represent, transform, generate, and recall visual information (Linn & Petersen, 1985). This multidimensional concept includes a series of psychological manipulations of visual information (Uttal et al., 2013). Many tasks have been used to measure spatial processing, including paper folding, three-dimensional (3D) mental rotation, and the Corsi blocks test. Paper folding is a classic task for measuring mental manipulation and spatial visualisation (e.g., Boonen et al., 2013; Boonen et al., 2014; Burte et al., 2017; Wei et al., 2012). 3D mental rotation is a typical task for measuring mental rotation ability (e.g., Boonen et al., 2013; Boonen et al., 2014; Delgado & Prieto, 2004; Oostermeijer et al., 2014; Tolar et al., 2009). Finally, the Corsi blocks test measures spatial working memory (e.g., Andersson & Lyxell, 2007; Berg, 2008; Geary et al., 2008).

On the one hand, a large number of previous studies have revealed a close relationship between spatial processing and mathematical problem-solving. First, significant correlations between spatial processing and mathematical problem-solving have been found not only for children (e.g., Bates et al., 2021; Gilligan et al., 2018) but also for adults (e.g., Wei et al., 2012; Xie et al., 2020). For example, Gilligan et al. (2018) explored the developmental relationships between mathematics and spatial skills in children aged 6-10 years and found that overall spatial skills explained 5%-14% of the variation across three mathematics performance measures (standardized mathematics skills, approximate number sense, and number line estimation skills); in that study, only language and age were included as control variables. Wei et al. (2012) found a close correlation between spatial processing ability and advanced mathematical processing in college students, controlling for factors such as age, gender, general intelligence, and language processing. Second, spatial processing can predict future mathematical achievement (e.g., Blatto-Vallee et al., 2007; Kyttälä & Björn, 2014). For example, Blatto-Vallee et al. (2007) found that spatial relationship competence (measured with the Primary Mental Abilities Spatial Relations Test, Optometric Extension Program, 1995, and the Revised Minnesota Paper Form Board Test, Likert & Quasha, 1994) explained over 20% of the variance in problem-solving scores among junior high school, high school, and college students. Third, training studies have shown that training spatial processing ability can improve mathematical problem-solving (e.g., Cheng & Mix, 2013; Lowrie et al., 2017; Schmitt et al., 2018). For example, one study showed that training first- and sixth-grade students to improve mental rotation and spatial visualisation led to significant enhancement in mathematics scores (Cheng & Mix, 2013).
Fourth, close associations have also been found between spatial processing and different types of mathematical problem-solving, including word-problem solving (e.g., Männamaa et al., 2012; Mix et al., 2016) and geometric proofing (e.g., Harris et al., 2021; Karaman & Toğrol, 2009). For instance, Harris et al. (2021) found that spatial processing significantly predicted geometric performance in fifth- and eighth-graders. Mix et al. (2016) assessed the relations among various spatial and mathematics skills in a cross-sectional study of 854 children from kindergarten, third, and sixth grades (i.e., 5 to 13 years of age); the results showed that the mathematical tasks that predicted the most variance in spatial skill were place value, word problems, calculation, fraction concepts, and algebra.

On the other hand, this close association could be due to the involvement of a spatial model constructed during problem solving. Researchers long ago put forward the concept of a "problem space" in problem solving: the external problem environment is abstracted into an internal representation, and a solution path is searched within this mental space (Newell & Simon, 1972). Hawes and Ansari (2020) proposed a spatial modelling account, according to which spatial visualisation provides a "mental blackboard" on which the numerical relations and operations involved in mathematical problem-solving can be modelled and visualised. The spatial model is a type of structure reflecting the associations among key pieces of problem information (Boonen et al., 2014; Hegarty & Kozhevnikov, 1999). For example, Hegarty and Kozhevnikov (1999) showed that schematic spatial representations (representing the quantitative relations between objects and imagining spatial transformations, such as a 15 m long path with trees spaced 5 m apart) were associated with success in mathematical problem-solving, whereas pictorial representations (constructing vivid and detailed visual pictures, such as a tree with leaves) were negatively correlated with success.

1.2. Logical reasoning in mathematical problem-solving

Logical reasoning, also known as deductive reasoning, refers to making deductions and reaching novel conclusions according to given premises (Markovits & Doyon, 2010). Mathematics curricula often treat logical reasoning as a core ability. In the 1980s, educators even began to discuss the role of logic in teaching proofs, emphasising that logical reasoning should be brought into mathematics teaching and that teachers should pay more attention to it (Durand-Guerrier et al., 2011). China's mathematics curriculum standards for compulsory education (2011 edition) identify reasoning ability as one of the ten core abilities in mathematics.

Although mathematics and logical reasoning are generally thought to be closely related, there is an essential distinction between them: mathematical problem-solving typically relies on mathematical knowledge, whereas logical reasoning relies on rule-based derivation. Whether a premise is factually correct or incorrect does not affect the validity of the logical deduction. In everyday thinking, however, people usually do not follow the rules of formal logic but rather rely on facts and experiences.
Many logical reasoning errors have been shown to reflect a failure to apply logical rules (e.g., Brisson et al., 2018; Dias & Harris, 1988; Hawkins et al., 1984; Nys et al., 2022; Scribner, 1977), including the "empirical bias" (Scribner, 1977) and the content effect (Brisson et al., 2018). The empirical bias refers to the tendency to draw conclusions based on personal knowledge and experience (Scribner, 1977). For example, Dias and Harris (1988) found that children perform worse when the premises of a syllogism contradict empirical facts (e.g., all fish live in trees; Tot is a fish; therefore Tot lives in trees). In addition to the empirical bias, the content effect shows that deductive reasoning performance is influenced by the content of the reasoning material. Brisson et al. (2018) and Nys et al. (2022) showed that logical reasoning is shaped by participants' background knowledge and by the content of the problems, for both adults and children. In mathematical problem-solving, by contrast, people usually follow mathematical knowledge rather than rule-based deduction. For example, studies by mathematics educators have shown that students tend to use empirical arguments rather than deductive proofs when solving geometric problems (e.g., Balacheff, 1988; Chazan, 1993; Martin & Harel, 1989).

Although the relationship between logical reasoning and mathematics has attracted extensive attention, only a few empirical studies have examined the association between logical reasoning and mathematical problem-solving.

Two studies have examined the association between mathematical problem-solving and relational reasoning, a type of logical reasoning (Morsanyi et al., 2013, 2017), and they showed consistent correlations. For example, Morsanyi et al. (2017) showed that relational reasoning was associated with number line estimation and open mathematical word problem-solving in adults after controlling for general intelligence. Three studies have examined the relationship between syllogistic reasoning and mathematical problem-solving, without consistent findings (Duque de Blas et al., 2021; Gomez-Veiga et al., 2018; Kleemans et al., 2018). Two of them found a correlation between syllogistic reasoning and mathematical problem-solving (Gomez-Veiga et al., 2018; Kleemans et al., 2018). For example, Kleemans et al. (2018) showed that syllogistic reasoning in fifth-grade students was related to their performance on geometric problems, controlling for age, sex, and general intelligence. The third study did not find this association (Duque de Blas et al., 2021). Four studies have examined the association between mathematical problem-solving and conditional reasoning, with inconsistent findings (Duque de Blas et al., 2021; Gomez-Veiga et al., 2018; Morsanyi et al., 2017; Wong, 2018): two found a correlation (Gomez-Veiga et al., 2018; Morsanyi et al., 2017), but two did not (Duque de Blas et al., 2021; Wong, 2018). For example, Wong (2018) found that, after controlling for intelligence, working memory, and language processing, the mathematical problem-solving ability of fourth-grade students was significantly correlated with conditional reasoning, but this correlation disappeared after further controlling for calculation and numerical sentence construction ability.

All six studies controlled for general intelligence measured with nonverbal matrix reasoning.
Only two studies (Duque de Blas et al., 2021; Wong, 2018) failed to show the association. The only difference between these two studies and the other four is that they additionally controlled for language processing or for number sentence construction and calculation. It therefore appears that the choice of covariates could be critical for detecting the association. Additionally, Duque de Blas et al. (2021) used two types of logical reasoning (conditional reasoning and syllogistic reasoning) and obtained consistent findings for both types; Gomez-Veiga et al. (2018) also used these two types of logical reasoning and likewise found consistent results for both. Therefore, applying a single type of reasoning should be an effective way to explore the association between logical reasoning and mathematical problem-solving. Moreover, the syllogism can be regarded as a common form of logic, because "Scholastic logicians thought that almost all arguments purporting to be logical could be expressed in syllogism" (p. 427, in a review by Khemlani & Johnson-Laird, 2012). Combining these two points, the present study focuses on the relationship between logical reasoning ability measured with syllogisms and mathematical problem-solving.

1.3. Present study

Numerous studies have shown that spatial processing skills are predictors of mathematical problem-solving skills, but relatively few studies have examined logical reasoning, and their results are inconsistent. Thus, the present study explored the independent prediction of mathematical problem-solving by logical reasoning and spatial processing, including important control variables (e.g., attention, general IQ, memory, language), and addressed the specific question of the relative importance of spatial processing versus logical (deductive) reasoning for mathematical problem-solving.

From a cognitive point of view, several cognitive factors (including attention, processing speed, memory, and language) could be involved when participants perform the tasks used to measure mathematical problem-solving, logical reasoning, and spatial processing (e.g., Bull & Sherif, 2001; Fuchs et al., 2006; Fürst & Hitch, 2000; Gutierrez et al., 2019; Knauff et al., 2003; Mayer et al., 1984; Swanson & Kim, 2007). First, previous research shows that language comprehension, computation, processing speed, working memory for verbal and visual-spatial information, and other factors affect mathematical problem-solving. For example, the execution of a mathematical problem depends on the retention of either verbal or visuospatial information (Cornoldi et al., 2012; Re et al., 2016), and processing speed is the best predictor of arithmetical competence in 7-year-old students (Bull & Johnston, 1997; Fuchs et al., 2006; Swanson & Kim, 2007). Mayer et al. (1984) suggested that solving mathematical problems requires good language understanding, relevant arithmetic operations, and other processes to reach a solution. Second, studies have shown that linguistic processing, spatial processing, memory, and attention, among other factors, may be associated with logical reasoning (e.g., Knauff et al., 2003; Yang et al., 2009). Further, nonverbal matrix reasoning and linguistic processing may be related to spatial processing (e.g., Colom et al., 2004; Gutierrez et al., 2019).
Therefore, to explore the independent prediction of mathematical problem-solving by logical reasoning and spatial processing, it is necessary to control these cognitive factors (including attention, processing speed, memory, and language) that could be involved when participants perform the tasks measuring mathematical problem-solving, logical reasoning, and spatial processing.

Two typical types of mathematical problems were used in the current investigation: word problems and geometry proofs. Although the two types of problems are presented differently, spatial processing is assumed to be fundamental to solving both. Logical reasoning, by contrast, often relies on empirical knowledge rather than rules, which can lead to incorrect judgements, whereas mathematical problem-solving must be based on mathematical knowledge. Therefore, logical reasoning may not be a necessary processing component of mathematical problem-solving. In conclusion, the current investigation explored the independent prediction of mathematical problem-solving by logical reasoning and spatial processing, and we hypothesised that spatial processing makes a unique contribution to mathematical problem-solving whereas logical reasoning does not. To test this hypothesis, we conducted two studies, whose characteristics are presented below.

2. Study 1

Study 1 was designed to examine the independent roles of spatial processing and logical reasoning in word problem-solving after controlling for other factors.

2.1. Methods

2.1.1. Participants

A total of 360 participants (179 men, 181 women; mean age = 20.84 years, SD = 1.76; age range = 17.3-27.8 years) were recruited from 10 universities in China. All participants were right-handed native Mandarin speakers with normal or corrected-to-normal visual acuity. Before participation, participants provided written informed consent after the investigation was fully explained. The study was approved by the institutional review board of the State Key Laboratory of Cognitive Neuroscience and Learning.

2.1.2. Tests

Seventeen tests were conducted. The testing data have been uploaded to the psychological research platform (www.dweipsy.com/lattice) (Wei et al., 2012; Wei et al., 2016; Zhou et al., 2015). An illustration of a trial from each test is shown in Fig. 1.

2.1.2.1. Mathematical word problems. This task is a typical measure of mathematical problem-solving and assesses the ability to solve word problems rather than bare arithmetic problems. All 15 problems involved the application of complex algebra. In each trial, an applied mathematics problem was presented on the computer screen until participants typed their answer into an input box below the problem. The test was limited to 6 min.

2.1.2.2. Logical reasoning

2.1.2.2.1. Abstract syllogistic reasoning. In each trial, three sentences were presented on the screen: two expressed premises and the third expressed a conclusion. All sentences used meaningless letters to make the premises abstract. The test consisted of 32 trials and was limited to 3 min.

2.1.2.2.2. Concrete syllogistic reasoning. All sentences used words from daily life; in all other respects, the test was identical to the abstract syllogistic reasoning test.

2.1.2.3. Spatial processing

2.1.2.3.1. 3D mental rotation. In each trial, a 3D figure was presented on the top half of the screen, along with two other 3D figures below it. One of the lower figures could be formed by rotating the upper figure, and the other was a mirror image of the upper figure. Rotation angles ranged from 15° to 345°.
Participants were asked to judge which of the bottom figures matched the upper figure after rotation. The test included 180 trials and lasted 3 min.

2.1.2.3.2. Paper folding. Participants were asked to imagine the folding and unfolding of pieces of paper. In each trial, a 2D figure at the top of the screen showed a square piece of folded paper with a hole punched in the direction indicated by an arrow. The number of holes created depended on when the hole was punched (e.g., after one fold, the punch made two holes). One of the five figures at the bottom of the screen correctly showed where the holes would be located when the paper was completely unfolded. The test included 18 trials and was limited to 4 min.

2.1.2.3.3. Corsi blocks test. Dots were presented sequentially in a 3 × 3 grid on the computer screen. Each dot was presented for 1000 ms, with a 1000-ms blank interval between dots. After all the dots in a trial had been presented, participants used the mouse to click the grid according to the position and order of the dot presentation. Participants completed all 10 trials. The number of dots per trial ranged from three to seven, and each set size was presented twice.

2.1.2.4. Arithmetic computation. In each trial, an arithmetic problem appeared on the screen, and participants had 15 s to mentally compute the answer and type it into an input box. There were 40 trials, including addition, subtraction, multiplication, and division problems, with integers or decimals as operands.

2.1.2.5. Number sense

2.1.2.5.1. Numerosity comparison. In each trial, two dot arrays appeared simultaneously on the screen for 200 ms. There were 120 trials in all. Participants were required to judge which of the two dot arrays contained more dots.

2.1.2.6. General IQ

2.1.2.6.1. Nonverbal matrix reasoning. Each trial presented a figure with a missing segment and 6-8 candidate segments. Participants were asked to use the mouse to choose the candidate that completed the figure according to the figure's inherent regularity. In total, 76 trials were conducted.

2.1.2.7. Attention

2.1.2.7.1. Visual searching. In each trial, three 'p's and two 'd's were presented interspersed in a line. Each letter had one to four dashes, configured individually or in pairs, above or below the letter. The target symbol was a 'd' with two dashes, regardless of the location of the dashes (two above, two below, or one above and one below). The test included 240 trials and was limited to 4 min.

2.1.2.8. Memory

2.1.2.8.1. Visual memory. The test comprised an encoding session and a recall session. During the encoding session, participants were asked to memorise a series of 40 pictures presented on the screen (for example, a line drawing of a tree). During the recall session, 80 pictures (40 old and 40 new) were presented, and participants judged whether each picture had been presented during the encoding session. Participants needed to complete all trials to finish the test.

2.1.2.8.2. Digit span. In each trial, digits were presented aurally at a rate of one digit per second, and participants were asked to remember them. After hearing all the digits, they typed them either in the order in which they had been heard or in reverse order. The initial trial contained three digits, and the number of digits gradually increased as the trials progressed.
The test was stopped when a participant provided three consecutive incorrect answers.

2.1.2.9. Visual perception

2.1.2.9.1. Figure matching. In each trial, two sets of complex figures were presented side by side on the screen for 400 ms. The left set contained only one figure, and the right set contained three figures. Participants judged whether the figure on the left also appeared on the right. The test included 120 trials grouped into three 40-trial sessions, and participants needed to complete all trials to finish the test.

2.1.2.10. Response and decision speed

2.1.2.10.1. Choice reaction time. A white dot was presented on a black screen to the left or right of a white fixation cross. Participants judged whether the dot was on the left or the right and responded by pressing the corresponding key. The test consisted of 30 trials (half with the dot on the left and half on the right).

2.1.2.11. Language processing

2.1.2.11.1. Sentence completion. Materials were adapted from recent language examinations used in China for Grades 1 to 12. In each trial, a sentence with one missing word was presented at the centre of the screen. The sentence and the choices remained on the screen until participants responded. There were 120 trials, and the test was limited to 5 min.

2.1.2.11.2. Reading comprehension. The materials were adapted from recent language examinations used in China for college entrance. In each trial, several paragraphs were presented on the screen, with a question about their content presented below. Participants chose one of the four options provided. The test consisted of 45 trials and was limited to 8 min.

Fig. 1. Schematic of the tests used in the present study.

2.1.3. Procedure

The battery contained 17 computerised tests, which participants completed in a psychological laboratory. Participants registered their demographic information before the experiment began. Before each test, the experimenter explained the instructions presented on the computer screen, and participants completed a practice session before the formal test. All participants completed all 17 tests. The entire session lasted 2 h, and participants were allowed to rest for half an hour.

For the numerosity comparison, abstract syllogistic reasoning, concrete syllogistic reasoning, 3D mental rotation, figure matching, simple reaction time, and sentence completion tests, participants entered their left/right choices by pressing the 'Q' or 'P' key on the keyboard, respectively. Participants used a mouse to make their choices for the paper folding, spatial working memory (Corsi blocks), and reading comprehension tests, which had more than two choices. For the word problem, arithmetic computation, and digit span tests, participants entered their answers by typing on a numeric keypad.

2.1.4. Data analyses

The indices for all tests are displayed in Table 1. The time-limited tests, including abstract syllogistic reasoning, concrete syllogistic reasoning, three-dimensional mental rotation, visual memory, and sentence completion, used the adjusted number of correct responses as their score; unanswered items were not counted as incorrect.
Guessing was corrected for by adjusting scores with the formula S = R − W/(n − 1), where S is the adjusted score, R is the number of correct responses, W is the number of incorrect responses, and n is the number of alternative responses for each item (Guilford, 1936). This adjustment was applied to control for the guessing effect (Cirino, 2011; Hedden & Yoon, 2006; Salthouse, 1994; Salthouse & Meinz, 1995).

Before the formal analyses, we used winsorising to handle extreme values (Hogg, 1979): values beyond three standard deviations from the mean were replaced by the values corresponding to plus or minus three standard deviations.

Descriptive statistics (mean, standard deviation, and split-half reliability) were generated for each of the 17 tests. Subsequently, Pearson's correlations were used to investigate the relationships among all 17 tests. Then, partial correlations between all variables and mathematical problem-solving were computed, controlling either for age and gender or for age, gender, and nonverbal matrix reasoning. Next, a series of hierarchical linear regression analyses was performed to determine the unique contributions of logical reasoning and spatial processing to word problems. Finally, path analysis was conducted.
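As an illustration only (a minimal sketch, not the authors' analysis code), the following Python snippet shows how the guessing correction S = R − W/(n − 1) and the three-standard-deviation winsorising step described above can be implemented; the function names and the example numbers are hypothetical.

```python
# Minimal sketch of the two preprocessing steps described in Section 2.1.4:
# the guessing correction S = R - W/(n - 1) (Guilford, 1936) and
# winsorising at +/- 3 standard deviations. Names and numbers are illustrative.
import numpy as np
import pandas as pd


def adjusted_score(correct: int, wrong: int, n_alternatives: int) -> float:
    """Correction for guessing: S = R - W/(n - 1).
    Unanswered items count neither as correct nor as incorrect."""
    return correct - wrong / (n_alternatives - 1)


def winsorise_3sd(x: pd.Series) -> pd.Series:
    """Replace values beyond 3 SDs from the mean with the +/- 3 SD bounds."""
    lo, hi = x.mean() - 3 * x.std(), x.mean() + 3 * x.std()
    return x.clip(lower=lo, upper=hi)


# Example: 14 correct and 6 incorrect responses on a two-alternative test
# give an adjusted score of 14 - 6/(2 - 1) = 8.
print(adjusted_score(correct=14, wrong=6, n_alternatives=2))

# Winsorise an (illustrative) score distribution before the correlation
# and regression analyses.
scores = pd.Series(np.random.default_rng(0).normal(27.0, 9.6, size=360))
scores_clean = winsorise_3sd(scores)
```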
2.2. Results

The means and standard deviations of the scores and the split-half reliabilities for all 17 tests are presented in Table 1.

The Pearson's correlation coefficients among the scores for all 17 tests are displayed in Table 2, with Bonferroni correction. The corrected significance level in Table 2 was set at 0.05, corresponding to an uncorrected p-value of 0.0003 (0.05/171 correlations).

The partial correlation results with Bonferroni correction (uncorrected p = 0.0003, 0.05/171 correlations) indicated that calculation, language processing, memory, attention, number sense, and general IQ, in addition to spatial processing and logical reasoning, were significantly correlated with word problems (see Table 3).

Table 4 shows whether logical reasoning or spatial processing made a unique contribution to word problems. Hierarchical multiple regression with Bonferroni correction was used (significance set at corrected p < 0.05). Neither type of logical reasoning played a significant role in word problems after controlling for the other factors (excluding spatial processing). Spatial processing explained 5.2 % of the variance after controlling for logical reasoning and the other cognitive factors (corrected p < 0.05).

Finally, Fig. 2 shows the path model of the structural relationships among the main variables. The latent variable 'spatial processing' had three manifest measures (three-dimensional mental rotation, paper folding, and the Corsi blocks test), and 'logical reasoning' had two manifest measures (abstract and concrete syllogistic reasoning). All variables in the model were residuals after controlling for age, gender, and the other cognitive factors. The hypothesised model was a good fit to the data (χ2(7) = 5.28, p = 0.626, RMSEA = 0.05, CFI = 1.00, SRMR = 0.02). The significance level of the path coefficients was Bonferroni-corrected and set to 0.05, corresponding to an original alpha of 0.025 (0.05/2 links).
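To make the incremental-variance logic of the hierarchical regressions summarised in Table 4 concrete, the following Python sketch estimates the change in R² when the three spatial measures are entered after the covariates and the two syllogism measures. This is a hedged illustration rather than the authors' code: the column names are hypothetical and the data frame is simulated placeholder data.

```python
# Sketch of the hierarchical-regression logic behind Table 4: the unique
# contribution of spatial processing is the change in R^2 when the three
# spatial measures are added to a model that already contains the covariates
# and the syllogism measures. All column names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

covariates = ["age", "gender", "matrix_reasoning", "visual_search",
              "visual_memory", "digit_span_fwd", "digit_span_bwd",
              "figure_matching", "reaction_time", "sentence_completion",
              "reading_comprehension", "computation", "numerosity_acc"]
reasoning = ["abstract_syllogism", "concrete_syllogism"]
spatial = ["mental_rotation_3d", "paper_folding", "corsi_blocks"]
outcome = "word_problems"


def r_squared(df: pd.DataFrame, outcome: str, predictors: list) -> float:
    """R^2 of an OLS regression of the outcome on the given predictors."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[outcome], X, missing="drop").fit().rsquared


# Placeholder data standing in for the 360 participants' scores.
rng = np.random.default_rng(1)
columns = covariates + reasoning + spatial + [outcome]
df = pd.DataFrame(rng.normal(size=(360, len(columns))), columns=columns)

r2_base = r_squared(df, outcome, covariates + reasoning)
r2_full = r_squared(df, outcome, covariates + reasoning + spatial)
print(f"Unique variance explained by spatial processing: {r2_full - r2_base:.3f}")
```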
Table 1
Means and standard deviations of scores for all tests in Study 1.

Test | Index | Mean (SD) | Split-half reliability
1. Mathematical word problems | Number of correct responses | 7.6 (3.0) | 0.79
2. Abstract syllogism reasoning | Adjusted no. of correct trials | 4.0 (5.3) | 0.78
3. Concrete syllogism reasoning | Adjusted no. of correct trials | 6.6 (5.7) | 0.80
4. Three-dimensional mental rotation | Adjusted no. of correct trials | 27.0 (9.6) | 0.93
5. Paper folding | Number of correct responses | 8.5 (2.9) | 0.86
6. Corsi blocks test | Accuracy (%) | 83.1 (5.6) | 0.96
7. Non-verbal matrix reasoning | Number of correct responses | 29.0 (5.6) | 0.89
8. Visual searching | Number of correct responses | 47.6 (30.8) | 0.96
9. Visual memory | Adjusted no. of correct trials | 69.7 (8.9) | 0.78
10. Digit span (forward) | Maximum number of correct trials | 9.7 (2.1) | /
11. Digit span (backward) | Maximum number of correct trials | 8.2 (2.0) | /
12.1 Figure matching (ACC) | Accuracy (%) | 77.8 (10.5) | 0.88
12.2 Figure matching (RT) | Reaction time (ms) | 897 (148) | 0.98
13. Simple reaction time | Reaction time (ms) | 366 (66) | 0.96
14. Sentence completion | Adjusted no. of correct trials | 42.0 (6.7) | 0.84
15. Reading comprehension | Number of correct responses | 9.0 (3.4) | 0.72
16. Arithmetic computation | Number of correct responses | 19.7 (5.3) | 0.85
17.1 Numerosity comparison (ACC) | Accuracy (%) | 81.3 (7.5) | 0.81
17.2 Numerosity comparison (RT) | Reaction time (ms) | 534 (91) | 0.98
Note: Adjusted no. of correct trials: S = R − W/(n − 1) (S: the adjusted score; R: the number of correct responses; W: the number of incorrect responses; n: the number of alternative responses to each item). This adjustment was made to control for the effect of guessing in multiple-choice tests. Accuracy = 100 − |Response − Standard answer| / (Standard answer + |Response − Standard answer|) × 100.

Table 2
Correlations among all the measure scores based on Pearson's correlations in Study 1. Columns follow the same order as the rows (1 to 17.2).
1. Mathematical word problems: –
2. Abstract syllogism reasoning: 0.30* –
3. Concrete syllogism reasoning: 0.33* 0.43* –
4. Three-dimensional mental rotation: 0.35* 0.25* 0.20* –
5. Paper folding: 0.51* 0.28* 0.30* 0.43* –
6. Corsi blocks test: 0.15 0.17 0.17 0.18 0.19* –
7. Nonverbal matrix reasoning: 0.41* 0.21* 0.28* 0.25* 0.52* 0.21* –
8. Visual searching: 0.26* 0.22* 0.20* 0.18 0.21* 0.11 0.26* –
9. Visual memory: 0.10 0.11 0.09 0.11 0.05 0.01 0.17 0.13 –
10. Digit span (forward): 0.13 0.09 0.15 0.09 0.14 0.12 0.25* 0.11 0.04 –
11. Digit span (backward): 0.27* 0.27* 0.25* 0.23* 0.25* 0.14 0.34* 0.22* 0.21* 0.46* –
12.1 Figure matching (ACC): 0.17 0.23* 0.20* 0.36* 0.17 0.20* 0.30* 0.22* 0.35* 0.11 0.31* –
12.2 Figure matching (RT): 0.03 0.15 0.01 0.15 0.04 0.05 0.02 0.01 0.33* 0.03 0.12 0.40* –
13. Simple reaction time: 0.14 0.04 0.11 0.21* 0.17 0.12 0.16 0.23* 0.06 0.03 0.14 0.16 0.21* –
14. Sentence completion: 0.20* 0.21* 0.23* 0.03 0.12 0.06 0.22* 0.15 0.15 0.12 0.14 0.14 0.05 0.19* –
15. Reading comprehension: 0.28* 0.05 0.19* 0.07 0.21* 0.04 0.24* 0.05 0.06 0.11 0.01 0.03 0.18 0.02 0.33* –
16. Arithmetic computation: 0.47* 0.40* 0.29* 0.33* 0.42* 0.12 0.44* 0.28* 0.13 0.18 0.35* 0.29* 0.15 0.13 0.15 0.05 –
17.1 Numerosity comparison (ACC): 0.05 0.15 0.05 0.12 0.02 0.11 0.03 0.05 0.18 0.05 0.12 0.22* 0.53* 0.37* 0.05 0.09 0.13 –
17.2 Numerosity comparison (RT): 0.22* 0.16 0.16 0.29* 0.22* 0.16 0.33* 0.24* 0.36* 0.06 0.25* 0.49* 0.30* 0.03 0.11 0.04 0.27* 0.33* –
* p < 0.05, using Bonferroni correction.

Table 3
Partial correlations between all control variables, logical reasoning, spatial ability, and mathematical word problems and geometry problems.
Predictors | Mathematical word problems (a / b) | Plane geometry (a / b) | Solid geometry (a / b)
1. Non-verbal matrix reasoning | 0.42* / – | 0.36* / – | 0.18 / –
2. Abstract syllogism reasoning | 0.29* / 0.22* | 0.20 / 0.15 | 0.21 / 0.18
3. Concrete syllogism reasoning | 0.34* / 0.26* | 0.17 / 0.11 | 0.15 / 0.12
4. Three-dimensional mental rotation | 0.33* / 0.25* | 0.22 / 0.17 | 0.07 / 0.05
5. Paper folding | 0.50* / 0.36* | 0.32* / 0.22 | 0.35* / 0.31*
6. Corsi blocks test | 0.15 / 0.07 | 0.10 / 0.09 | 0.01 / 0.02
7. Visual searching | 0.27* / 0.19* | 0.33* / 0.25* | 0.23 / 0.20
8. Visual memory | 0.11 / 0.04 | 0.07 / 0.02 | 0.10 / 0.07
9. Digit span (forward) | 0.14 / 0.04 | 0.14 / 0.03 | 0.10 / 0.05
10. Digit span (backward) | 0.27* / 0.16 | 0.28* / 0.18 | 0.21 / 0.17
11.1 Figure matching (ACC) | 0.17 / 0.05 | 0.12 / 0.06 | 0.15 / 0.12
11.2 Figure matching (RT) | 0.03 / 0.04 | 0.06 / 0.09 | 0.03 / 0.03
12. Simple reaction time | 0.15 / 0.09 | 0.03 / 0.04 | 0.08 / 0.05
13. Sentence completion | 0.22* / 0.15 | 0.19 / 0.11 | 0.14 / 0.10
14. Reading comprehension | 0.29* / 0.21* | 0.07 / 0.06 | 0.06 / 0.00
15. Arithmetic computation | 0.46* / 0.34* | 0.44* / 0.34* | 0.31* / 0.27*
16.1 Numerosity comparison (ACC) | 0.22* / 0.10 | 0.11 / 0.04 | 0.02 / 0.02
16.2 Numerosity comparison (RT) | 0.04 / 0.05 | 0.00 / 0.01 | 0.07 / 0.06
Note: In column a the control variables are age and gender; in column b the control variables are age, gender, and nonverbal matrix reasoning. * p < 0.05, using Bonferroni correction.

2.3. Discussion

The results show that, after controlling for an extensive range of critical variables, measures of spatial skill significantly predicted word-problem performance in college students, whereas measures of logical reasoning did not.

However, while paper-folding skill correlated with the ability to solve mathematical word problems, 3D mental rotation and spatial working memory did not correlate with it after the other factors were controlled. This might be because mathematical word problems require visualising the actual problem situation, building a structured spatial model, and handling multiple problem attributes, such as digital information, complex knowledge diagrams, and multi-step operations (Zhang, 2016). This process is similar to the handling of the numerous problem attributes involved in the operational conversions of paper folding ('punch locations', 'number of folds', and 'types of folds') (Burte et al., 2019). In contrast, the problem attributes of mental rotation mainly concern the rotation angle, and those of the Corsi blocks test mainly concern memory for the number and locations of the dots. The problem attributes of paper folding are therefore more numerous than those of mental rotation and the Corsi blocks test and are more similar to the structured spatial model constructed during word problem-solving, which may explain why neither mental rotation nor the Corsi blocks test had a significant effect on mathematical word problems.

Additionally, the results showed that when the spatial measures were not included, nonverbal IQ was a significant predictor, whereas it was no longer significant when the spatial measures were included. This is consistent with studies showing a relationship between nonverbal measures of IQ and spatial reasoning. Colom et al. (2004) administered the Advanced Progressive Matrices Test (APM) and the Spatial Rotation Test from the Primary Mental Abilities Battery (PMA) to 239 university undergraduates. The results showed that men outperformed women on both tests; however, the male advantage on the APM was no longer significant when gender differences in spatial rotation were statistically controlled. It was therefore suggested that gender differences on the APM could be a by-product of its visuospatial format. Moreover, Gutierrez et al. (2019) found that nonverbal matrix reasoning was highly correlated with spatial processing.
That study used two tests measuring spatial processing, Guay's Visualisation of Views Test, Adapted Version (VVT) and the Mental Rotations Test (MRT), and one test measuring nonverbal general reasoning ability, Raven's Advanced Progressive Matrices Test (APMT). The results showed that the spatial processing scores from the VVT and MRT were positively correlated with the nonverbal general reasoning ability scores (APMT), supporting the idea that these abilities are linked.

Reading comprehension and arithmetic are also essential components of word-problem solving. Mathematical word problems describe real-world events and relationships, are stated in natural language, and rest on mathematical operations (Bassok, 2001). Therefore, mathematical word problems require a certain level of language understanding (e.g., Cummins et al., 1988; De Corte et al., 1985), and it is also necessary to apply mathematical knowledge, such as knowledge of addition and subtraction operations (e.g., Nesher, 2020; Sophian & Vong, 1995).

3. Study 2

Study 2 was designed to examine the unique roles of logical reasoning and spatial processing in geometric problem-solving.

3.1. Methods

3.1.1. Participants

Participants were 209 undergraduate students (106 men, 103 women; mean age = 20.96 years, SD = 1.61 years; age range = 17.8 to 27.8 years) from universities in China. All other information is the same as in Study 1.

3.1.2. Tests

The tests were similar to those in Study 1, except that the word problems were replaced with plane and solid geometry problems (18 in total).

Table 4 Hierarchica