156-587 Dumps Questions for Preparation
Source: CertSpots.com – https://www.certspots.com/exam/156-587/

Exam: 156-587
Title: Check Point Certified Troubleshooting Expert - R81.20 (CCTE)
Version: V9.02

1. You run the free command on a gateway and notice that the Swap column is not zero. Choose the best answer.
A. Utilization of RAM is high and the swap file had to be used
B. The swap file is used regularly because RAM memory is reserved for management traffic
C. Swap memory is used for heavy connections when RAM memory is full
D. It's OK; swap is used to increase performance
Answer: A
Explanation:
When the free command on a Linux-based system (such as a Check Point Gaia gateway) shows a non-zero value in the "Swap" column, it indicates that the system has utilized its swap space. Swap space is a portion of the hard disk designated to act as virtual RAM when the physical RAM is fully utilized. The most direct and accurate explanation for swap usage is that the system's demand for RAM exceeded the available physical RAM, forcing the operating system to move less frequently used memory pages from RAM to the swap space on disk. This frees up physical RAM for more active processes.
Let's analyze the options:
A. Utilization of RAM is high and the swap file had to be used: This is the correct and fundamental reason. Swap is used precisely because RAM utilization reached a point where the system needed more memory than was physically available.
B. The swap file is used regularly because RAM memory is reserved for management traffic: While Check Point gateways handle management traffic, operating systems do not typically use swap "regularly" due to a fixed reservation of RAM for such traffic in a way that would routinely force swapping under normal conditions.
If management traffic is excessively high and consumes too much RAM, it falls under the general case of high RAM utilization.
C. Swap memory is used for heavy connections when RAM memory is full: This describes a common cause of high RAM utilization on a firewall. Heavy connections can consume significant memory resources, and when that consumption leads to RAM exhaustion, swap will indeed be used. However, option A is a more general and direct explanation of why swap is used, regardless of the specific cause of high RAM utilization. Option C is a specific scenario leading to the condition described in A.
D. It's OK; swap is used to increase performance: This statement is incorrect. Swapping to disk is significantly slower than accessing RAM, so swap usage generally indicates a performance bottleneck (or the potential for one) rather than a performance enhancement. While virtual memory (which includes swap) allows a system to run more or larger applications than its physical RAM alone would allow, the act of swapping itself is detrimental to performance.
Conclusion: The best answer is A because it directly and accurately describes the immediate reason for swap usage: high RAM utilization necessitating the use of the swap file. Option C, while plausible as a cause of high RAM utilization, is a specific instance, whereas A is the overarching reason swap comes into play.
Reference (general Linux/system administration principles, supported by CCTE exam preparation materials): This understanding is based on fundamental principles of how operating systems manage memory and swap space. Check Point CCTE R81.20 exam preparation materials affirm this understanding; questions identical to this one typically point to option A as the correct answer.
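As a quick illustration of reading swap usage the way question 1 describes, the snippet below parses a captured sample of free -m output with awk and flags non-zero swap usage. The sample values are invented for demonstration; on a real gateway you would pipe the live free output into the same awk filter.

```shell
# Sample output captured from `free -m` (values are illustrative only).
sample='              total        used        free      shared  buff/cache   available
Mem:           7977        6912         210         118         854         631
Swap:          8191        1024        7167'

# Column 3 of the "Swap:" row is the amount of swap in use (MB).
swap_used=$(printf '%s\n' "$sample" | awk '/^Swap:/ {print $3}')

if [ "$swap_used" -gt 0 ]; then
    echo "WARNING: ${swap_used} MB of swap in use - RAM demand exceeded physical memory"
else
    echo "OK: swap is unused"
fi
```

On a live system the equivalent check would be `free -m | awk '/^Swap:/ {print $3}'`.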
2. You modified kernel parameters and, after rebooting the gateway, a lot of production traffic gets dropped and the gateway acts strangely. What should you do?
A. Run the command fw ctl set int fw1_kernel_all_disable=1
B. Restore fwkern.conf from backup and reboot the gateway
C. Run fw unloadlocal to remove the parameters from the kernel
D. Remove all kernel parameters from fwkern.conf and reboot
Answer: B
Explanation:
If you have modified kernel parameters (in fwkern.conf, for example) and the gateway starts dropping traffic or behaving abnormally after a reboot, the best practice is to restore the original or a known-good configuration from backup. Then reboot again so that the gateway loads the last known stable settings.
Option A (fw ctl set int fw1_kernel_all_disable=1) is not a standard or documented method for "undoing" all kernel tweaks.
Option B (Restore fwkern.conf from backup and reboot the gateway) is the correct and straightforward approach.
Option C (fw unloadlocal) removes the local policy but does not revert custom kernel parameters that have already been loaded at boot.
Option D (Remove all kernel parameters from fwkern.conf and reboot) might help in some cases, but you risk losing other beneficial or necessary parameters if there were legitimate custom settings. Restoring from a known-good backup is safer and more precise.
Hence, the best answer: "Restore fwkern.conf from backup and reboot the gateway."
Check Point Troubleshooting Reference:
sk98339 – Working with fwkern.conf (kernel parameters) in Gaia OS.
sk92739 – Advanced System Tuning in Gaia OS.
Check Point Gaia Administration Guide – Section on kernel parameters and system tuning.
Check Point CLI Reference Guide – Explanation of using fw ctl, fw unloadlocal, and relevant troubleshooting commands.

3. What process monitors, terminates, and restarts critical Check Point processes as necessary?
A. CPM
B. FWD
C. CPWD
D.
FWM
Answer: C
Explanation:
CPWD (Check Point WatchDog) is the process that monitors, terminates (if necessary), and restarts critical Check Point processes (e.g., FWD, FWM, CPM) when they stop responding or crash.
CPM (Check Point Management) is a process on the Management Server responsible for web-based SmartConsole connections, policy installation, etc.
FWD (Firewall Daemon) handles logging and communication functions on the Security Gateway.
FWM (FireWall Management) is an older reference to the management process on the Management Server in older versions.
Therefore, the best answer is CPWD.
Check Point Troubleshooting Reference:
sk97638 – Check Point WatchDog (CPWD) process explanation and commands.
R81.20 Administration Guide – Section on CoreXL, daemons, and CPWD usage.
sk105217 – Best Practices: explains system processes, how to monitor them, and how CPWD is utilized.

4. When dealing with monolithic operating systems such as Gaia, where are system calls initiated from to achieve a required system-level function?
A. Kernel Mode
B. Slow Path
C. Medium Path
D. User Mode
Answer: A

5. Which of the following commands can be used to see the list of processes monitored by the WatchDog process?
A. cpstat fw -f watchdog
B. fw ctl get str watchdog
C. cpwd_admin list
D. ps -ef | grep watchd
Answer: C
Explanation:
To see the list of processes monitored by the WatchDog process (CPWD), you use the cpwd_admin list command.
Option A (cpstat fw -f watchdog): Shows firewall status and statistics for the "fw" context, not the list of monitored processes.
Option B (fw ctl get str watchdog): Not a valid parameter for retrieving the list of monitored processes; "fw ctl" deals with kernel parameters.
Option C (cpwd_admin list): The correct command; it lists all processes monitored by CPWD, their status, and how many times they have been restarted.
Option D (ps -ef | grep watchd): This lists any running process matching the string "watchd", but it does not specifically detail which processes are being monitored by CPWD.
Therefore, the best answer is cpwd_admin list.
Check Point Troubleshooting Reference:
sk97638 – Explains Check Point WatchDog (CPWD) usage and the cpwd_admin utility.
R81.20 CLI Reference Guide – Describes common troubleshooting commands, including cpwd_admin list.
Check Point Gaia Administration Guide – Provides instructions for monitoring system processes and verifying CPWD.

6. What tool would you run to diagnose logging and indexing?
A. run cpm_doctor.sh
B. cpstat mg -f log_server
C. run diagnostic view
D. run doctor-log.sh
Answer: D

7. You found out that $FWDIR/log/fw.log is constantly growing in size on a Security Gateway. What is the reason?
A. TCP state logging is enabled
B. It is not a problem; the gateway is logging connections and also sessions
C. fw.log can grow when the gateway does not have space in the logging directory
D. The gateway is logging locally
Answer: B

8. What is the best way to resolve an issue caused by a frozen process?
A. Power off the machine
B. Restart the process
C. Reboot the machine
D. Kill the process
Answer: D
Explanation:
When a process is frozen (hung or unresponsive), the typical method to resolve it is to kill the process. On Check Point systems, you can use cpwd_admin kill -name <ProcessName> or a standard Linux kill -9 <PID> command if necessary. You then allow CPWD (the Check Point WatchDog) to restart it, or manually restart it if needed.
Other options:
A. Power off the machine: Too drastic and not recommended for a single frozen process.
B. Restart the process: While this sounds viable, you typically must kill the frozen process first, then let WatchDog or an admin restart it.
C. Reboot the machine: Similar to powering off, too disruptive for just one stuck process.
Hence, the most direct and standard approach: "Kill the process."
Check Point Troubleshooting Reference:
sk97638 – Explanation of CPWD (Check Point WatchDog) and how to manage processes.
sk43807 – How to gracefully stop or kill a Check Point process.
Check Point CLI Reference Guide – Details on using cpwd_admin commands to kill or restart processes.

9. Which of the following files is commonly associated with troubleshooting crashes on a system such as the Security Gateway?
A. tcpdump
B. core dump
C. fw monitor
D. CPMIL dump
Answer: B
Explanation:
When troubleshooting crashes on a Security Gateway (or any Linux-based system), the file type that is typically generated and used for in-depth analysis is a core dump. A core dump captures the memory state of a process at the time it crashed and is critical for root-cause analysis.
Other options:
A. tcpdump: A packet capture tool, not a crash-related file.
C. fw monitor: A Check Point packet capture tool, not a crash-debugging file.
D. CPMIL dump: Not a common or standard crash dump reference in Check Point.

10. When a User Mode process suddenly crashes, it may create a core dump file. Which of the following information is available in the core dump and may be used to identify the root cause of the crash?
i. Program Counter
ii. Stack Pointer
iii. Memory management information
iv. Other processor and OS flags/information
A. iii and iv only
B. i and ii only
C. i, ii, iii and iv
D. iii only
Answer: C
Explanation:
A core dump file is essentially a snapshot of the process's memory at the time of the crash. This snapshot includes crucial information that can help diagnose the cause of the crash. Here's why all the options are relevant:
i. Program Counter: This register stores the address of the next instruction the CPU was supposed to execute. It pinpoints exactly where in the code the crash occurred.
ii. Stack Pointer: This register points to the top of the call stack, which shows the sequence of function calls that led to the crash.
This helps trace the program's execution flow before the crash.
iii. Memory management information: This includes details about the process's memory allocations, which can reveal issues like memory leaks or invalid memory access attempts.
iv. Other processor and OS flags/information: This encompasses various registers and system information that provide context about the state of the processor and operating system at the time of the crash.
By analyzing this information within the core dump, you can often identify the root cause of the crash, such as a segmentation fault, null pointer dereference, or stack overflow.
Check Point Troubleshooting Reference: While core dumps are a general concept in operating systems, Check Point's documentation touches on them in the context of troubleshooting specific processes such as fwd (firewall daemon) or cpd (Check Point daemon). Note that kernel debugging tools such as fw ctl zdebug collect kernel debug messages; core dumps themselves are generated automatically when a user-mode process crashes.

11. Where will the usermode core files be located?
A. $FWDIR/var/log/dump/usermode
B. /var/suroot
C. /var/log/dump/usermode
D. $CPDIR/var/log/dump/usermode
Answer: D
Explanation:
Usermode core files are generated when a user-mode process crashes. They are located in the $CPDIR/var/log/dump/usermode directory on the Security Gateway or Security Management Server. The core files can be used to analyze the cause of the crash and troubleshoot the issue. They are named according to the process name, date, and time of the crash. For example, cpd_2023_02_03_16_40_55.core is a core file for the cpd process that crashed on February 3, 2023 at 16:40:55.

12. What is the function of the Core Dump Manager utility?
A. To determine which process is slowing down the system
B. To send crash information to an external analyzer
C. To limit the number of core dump files per process as well as the total amount of disk space used by core files
D.
To generate a new core dump for analysis
Answer: C
Explanation:
The Core Dump Manager (CDM) is a utility that helps manage core dump files on Check Point systems. Its main functions include:
Limiting file size and number: CDM can be configured to limit the size of individual core dump files and the total amount of disk space used for core dumps. This prevents core dumps from filling up valuable disk space.
Compression: CDM can compress core dump files to reduce their storage size, which is particularly helpful when dealing with large core dumps.
Process filtering: CDM allows you to specify which processes should be allowed to generate core dumps, which can help prevent unnecessary core dumps from being created.
Remote collection: CDM can be configured to send core dump files to a remote server for analysis, which is useful in environments where direct access to the system generating the core dump is limited.
By using CDM, you can effectively manage core dump files and ensure that they do not overwhelm your system's resources.

13. What is the proper command for allowing the system to create core files?
A. service core-dump start
B. $FWDIR/scripts/core-dump-enable.sh
C. set core-dump enable
save config
D. # set core-dump enable
# save config
Answer: C

14. When a user-space process or program suddenly crashes, what type of file is created for analysis?
A. core dump
B. kernel_memory_dump dbg
C. core analyzer
D. coredebug
Answer: A
Explanation:
When a user-space process crashes unexpectedly, the operating system often creates a core dump file. This file is a snapshot of the process's memory at the time of the crash, including information such as:
Program counter: Indicates where the program was executing when it crashed.
Stack pointer: Shows the function call stack, which can help trace the sequence of events leading to the crash.
Memory contents: Includes the values of variables and data structures used by the process.
Register values: Shows the state of the processor registers at the time of the crash.
Core dump files can be analyzed using debuggers like GDB to understand the cause of the crash.
Why the other options are incorrect:
B. kernel_memory_dump dbg: Refers to a kernel memory dump, which is generated when the operating system kernel itself crashes.
C. core analyzer: A tool used to analyze core dump files, not the file itself.
D. coredebug: Not a standard term for any type of crash dump file.
Check Point Troubleshooting Reference: Check Point's documentation mentions core dumps in the context of troubleshooting various processes, such as fwd (firewall daemon) and cpd (Check Point daemon). You can find information on enabling core dumps and analyzing them in the Check Point administration guides and knowledge base articles.

15. You receive reports from multiple users that they cannot browse. Upon further discovery, you identify that Identity Awareness cannot identify the users properly and apply the configured Access Roles. Which commands can you use to troubleshoot all identity collectors and identity providers from the command line?
A. on the gateway: pdp debug set IDC all IDP all
B. on the gateway: pdp debug set AD all and IDC all
C. on the management: pdp debug on IDC all
D. on the management: pdp debug set all
Answer: A
Explanation:
To troubleshoot Identity Awareness issues related to user identification and Access Role application, you need to enable debugging for both Identity Collectors (IDC) and Identity Providers (IDP). The command pdp debug set IDC all IDP all on the gateway achieves this.
Here's why this is the correct answer and why the others are not:
A. on the gateway: pdp debug set IDC all IDP all: This correctly enables debugging for all Identity Collectors and Identity Providers, allowing you to see detailed logs and messages related to user identification and Access Role assignment.
This helps pinpoint issues with user mapping, authentication, or authorization.
B. on the gateway: pdp debug set AD all and IDC all: This command only enables debugging for Active Directory (AD) as an Identity Provider and all Identity Collectors. It might miss issues related to other Identity Providers if they are in use.
C. on the management: pdp debug on IDC all: This command has two issues. First, it should be executed on the gateway, not the management server, as the gateway is responsible for user identification and policy enforcement. Second, it only enables debugging for Identity Collectors, not Identity Providers.
D. on the management: pdp debug set all: While this command might seem to enable debugging for everything, it is not specific enough for Identity Awareness troubleshooting. It might generate excessive logs unrelated to the issue and make it harder to find the relevant information.
Check Point Troubleshooting Reference:
Check Point Identity Awareness Administration Guide – Detailed information about Identity Awareness components, configuration, and troubleshooting.
Check Point sk113963 – Explains how to troubleshoot Identity Awareness issues using debug commands and logs.
Check Point R81.20 Security Administration Guide – Covers general troubleshooting and debugging techniques, including the use of pdp debug commands.

16. When a user process or program suddenly crashes, a core dump is often used to examine the problem. Which command is used to enable core dumping via Gaia clish?
A. set core-dump enable
B. set core-dump total
C. set user-dump enable
D. set core-dump per_process
Answer: A
Explanation:
In Check Point Gaia, you can enable core dumping through the command-line interface (clish) using the following command:
set core-dump enable
This command activates the core dump mechanism, allowing the system to generate core dump files when user processes crash.
Remember to save the configuration after enabling core dumps with the command:
save config
Why the other options are incorrect:
B. set core-dump total: Sets the total disk space limit for core dump files; it does not enable core dumping itself.
C. set user-dump enable: There is no such command in Gaia clish for enabling core dumps.
D. set core-dump per_process: Sets the maximum number of core dump files allowed per process, but does not enable core dumping.
Check Point Troubleshooting Reference:
Check Point R81.20 Security Administration Guide – Comprehensive information about Gaia clish commands, including those related to system configuration and troubleshooting.
Check Point sk92764 – Specifically addresses core dump management in Gaia, explaining how to enable and configure core dumps.
Enabling core dumps is a crucial step in troubleshooting process crashes, as it provides valuable information for analysis and debugging.

17. What is NOT a benefit of the 'fw ctl zdebug' command?
A. Automatically allocates a 1MB buffer
B. Collects debug messages from the kernel
C. Cannot be used to debug additional modules
D. Cleans the buffer
Answer: C
Explanation:
The fw ctl zdebug command is a powerful tool that can be used to collect debug messages from the kernel, clean the buffer, and automatically allocate a 1MB buffer. However, it cannot be used to debug additional modules, such as SecureXL, CoreXL, or VPN. For those modules, other commands or tools are needed, such as fwaccel dbg, fw ctl affinity, or vpn debug.
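To illustrate the zdebug behavior described above, the following is a non-runnable sketch of a typical one-shot kernel debug session. These commands are Check Point-specific and only work on a Security Gateway; the 'drop' flag is one common choice shown as an example.

```shell
# Sketch only: these commands run on a Check Point Security Gateway,
# not on a generic Linux host.

# One-shot kernel debug: zdebug automatically allocates a 1MB buffer,
# enables the given debug flags, streams kernel debug messages, and
# cleans up the buffer and flags when stopped (Ctrl+C).
fw ctl zdebug + drop

# zdebug cannot target other modules such as SecureXL or VPN;
# those have their own debug tools (e.g. fwaccel dbg, vpn debug):
vpn debug on     # enable VPN daemon debugging
vpn debug off    # disable it when finished
```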
Reference:
"fw ctl zdebug" – Helpful Command Combinations.
How to use the "fw ctl zdebug" command.
Check Point Certified Troubleshooting Expert R81.1 (CCTE) Course Outline – Module 4: Debugging Tools and Methods.

18. When debugging is enabled on the firewall kernel module using the 'fw ctl debug' command with the required options, many debug messages are provided by the kernel that help the administrator identify issues. Which of the following is true about these debug messages generated by the kernel module?
A. Messages are written to the /etc/dmesg file
B. Messages are written to a buffer and collected using 'fw ctl kdebug'
C. Messages are written to $FWDIR
D. Messages are written to the console and also to the /var/log/messages file
Answer: B

19. During a firewall kernel debug with fw ctl zdebug, you received less information than expected. You noticed that a lot of messages were lost since the time the debug was started. What should you do to resolve this issue?
A. Increase the debug buffer: use fw ctl debug -buf 32768
B. Redirect debug output to a file: use fw ctl debug -o /debug.elg
C. Redirect debug output to a file: use fw ctl zdebug -o /debug.elg
D. Increase the debug buffer: use fw ctl zdebug -buf 32768
Answer: A

20. You need to run a kernel debug over a longer period of time, as the problem occurs only once or twice a week. Therefore, you need to add a timestamp to the kernel debug and write the output to a file, but you cannot afford to fill up all the remaining disk space and you only have 10 GB free for saving the debugs. What is the correct syntax for this?
A. fw ctl kdebug -T -f -m 10 -s 1000000 -o debugfilename
B. fw ctl debug -T -f -m 10 -s 1000000 -o debugfilename
C. fw ctl kdebug -T -f -m 10 -s 1000000 > debugfilename
D. fw ctl kdebug -T -m 10 -s 1000000 -o debugfilename
Answer: A
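Pulling questions 19 and 20 together, the following non-runnable sketch shows what a long-running, disk-bounded kernel debug session might look like. These are Check Point-specific commands for a Security Gateway; the 'drop' flag and the per-file size interpretation are assumptions made for illustration.

```shell
# Sketch only: runs on a Check Point Security Gateway, not a generic host.

# 1. Enlarge the debug buffer so messages are not lost (question 19):
fw ctl debug -buf 32768

# 2. Enable the debug flags of interest ('drop' shown as an example):
fw ctl debug -m fw + drop

# 3. Collect over a long period with timestamps, cycling through a bounded
#    set of output files so disk usage stays limited (question 20):
#      -T           add timestamps to each message
#      -f           follow (continuous collection)
#      -m 10        keep at most 10 output files
#      -s 1000000   maximum size per file
fw ctl kdebug -T -f -m 10 -s 1000000 -o debugfilename

# 4. When finished, reset all debug flags:
fw ctl debug 0
```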