SNOWPRO ADVANCED CERTIFICATION Exam DEA-C01 Questions V8.02
SnowPro Advanced Certification Topics - SnowPro Advanced Data Engineer Certification
Latest Snowflake DEA-C01 Exam Questions PDF - Pass On The First Attempt

1. Which are the valid options for the VALIDATION_MODE parameter in the COPY command?
A. RETURN_<n>_ROWS
B. RETURN_ERROR
C. RETURN_ERRORS
D. RETURN_ALL_ERRORS
Answer: A, C, D
Explanation
VALIDATION_MODE = RETURN_n_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS
String (constant) that instructs the COPY command to validate the data files instead of loading them into the specified table; i.e. the COPY command tests the files for errors but does not load them. The command validates the data to be loaded and returns results based on the validation option specified:
RETURN_n_ROWS (e.g. RETURN_10_ROWS): Validates the specified number of rows if no errors are encountered; otherwise, fails at the first error encountered in the rows.
RETURN_ERRORS: Returns all errors (parsing, conversion, etc.) across all files specified in the COPY statement.
RETURN_ALL_ERRORS: Returns all errors across all files specified in the COPY statement, including files with errors that were partially loaded during an earlier load because the ON_ERROR copy option was set to CONTINUE during the load.
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#optional-parameters

2. FORMAT_NAME and TYPE are mutually exclusive in the COPY command.
A. TRUE
B. FALSE
Answer: A
Explanation
FILE_FORMAT = ( FORMAT_NAME = 'file_format_name' ) or FILE_FORMAT = ( TYPE = CSV | JSON | AVRO | ORC | PARQUET | XML [ ... ] )
Specifies the format of the data files to load:
FORMAT_NAME = 'file_format_name'
Specifies an existing named file format to use for loading data into the table. The named file format determines the format type (CSV, JSON, etc.), as well as any other format options, for the data files. For more information, see CREATE FILE FORMAT.
TYPE = CSV | JSON | AVRO | ORC | PARQUET | XML [ ... ]
Specifies the type of files to load into the table. If a format type is specified, then additional format-specific options can be specified. For more details, see Format Type Options (in this topic).
Note: FORMAT_NAME and TYPE are mutually exclusive; specifying both in the same COPY command might result in unexpected behavior.

3. Which of the below compression techniques are applicable to the CSV file format?
A. GZIP
B. BZ2
C. BROTLI
D. ZSTD
E. DEFLATE
F. RAW_DEFLATE
G. LZIP
Answer: A, B, C, D, E, F
Explanation
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#type-csv
AUTO: Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO.
GZIP
BZ2
BROTLI: Must be specified when loading Brotli-compressed files.
ZSTD: Zstandard v0.8 (and higher) supported.
DEFLATE: Deflate-compressed files (with zlib header, RFC1950).
RAW_DEFLATE: Raw Deflate-compressed files (without header, RFC1951).
NONE: Data files to load have not been compressed.
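A minimal sketch tying the options above together, assuming a hypothetical target table RAW_SALES, a named stage my_stage, and gzip-compressed CSV files:

-- Validate the staged files first; nothing is loaded while VALIDATION_MODE is set.
copy into raw_sales
  from @my_stage/sales/
  file_format = (type = csv skip_header = 1 compression = gzip)
  validation_mode = return_errors;

Once the validation returns no error rows, the same statement can be rerun without VALIDATION_MODE to perform the actual load.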
4. Snowflake charges a per-byte fee when users transfer data from your Snowflake account into cloud storage in another region on the same cloud platform, or into cloud storage in another cloud platform.
A. TRUE
B. FALSE
Answer: A
Explanation
https://docs.snowflake.com/en/user-guide/billing-data-transfer.html#understanding-snowflake-data-transfer-billing
Cloud providers apply data egress charges in either of the following use cases:
Data is transferred from one region to another within the same cloud platform.
Data is transferred out of the cloud platform.
To recover these expenses, Snowflake charges a per-byte fee when users transfer data from your Snowflake account (hosted on AWS, Google Cloud Platform, or Microsoft Azure) into cloud storage in another region on the same cloud platform, or into cloud storage in another cloud platform. The amount charged per byte depends on the region where your Snowflake account is hosted. For data transfer pricing, see the pricing guide on the Snowflake website.

5. In which of the below use cases does Snowflake apply data egress charges?
A. Unloading data from Snowflake
B. Database replication
C. External functions
D. Loading data into Snowflake
Answer: A, B, C
Explanation
Data Transfer Billing Use Cases
Snowflake currently applies data egress charges only in the following use cases:
Unloading Data from Snowflake: Using COPY INTO <location> to unload data to cloud storage in a region or cloud platform different from where your Snowflake account is hosted.
Database Replication: Replicating data to a Snowflake account in a region or cloud platform different from where your primary (origin) Snowflake account is hosted.
External Functions:
AWS: Data transfers sent from your Snowflake account are billed at the cross-cloud platform rate regardless of the cloud platform that hosts your Snowflake account or the region in which your account is located. Data sent via API Gateway Private Endpoints incurs PrivateLink charges for both ingress and egress.
Azure: Data transfers within the same region are free, and therefore there are no charges for Snowflake to pass on to the account.
https://docs.snowflake.com/en/user-guide/billing-data-transfer.html#data-transfer-billing-use-cases

6. Which of the below transformations are supported by Snowflake while loading a table using the COPY statement?
A. Column reordering
B. Column renaming
C. Column omission
D. Casts
E. Truncating text strings that exceed the target column length
Answer: A, C, D, E
Explanation
Simple Transformations During a Load
Snowflake supports transforming data while loading it into a table using the COPY command. Options include:
Column reordering
Column omission
Casts
Truncating text strings that exceed the target column length
There is no requirement for your data files to have the same number and ordering of columns as your target table.
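A minimal sketch of such a transforming load, assuming a hypothetical staged CSV whose columns arrive as (name, id, salary) while the hypothetical target table EMPLOYEES is defined as (id, name); the statement reorders the columns, omits salary, and casts id to a number:

copy into employees (id, name)
  from (
    -- $1 = name, $2 = id, $3 = salary in the staged file
    select t.$2::number, t.$1
    from @my_stage/employees/ t
  )
  file_format = (type = csv);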
7. Which of the below functions can a task use to see whether a stream contains change data for a table?
A. SYSTEM$STREAM_HAS_DATA
B. SYSTEM#STREAM_HAS_DATA
C. SYSTEM_HAS_STREAM_DATA
Answer: A
Explanation
Whenever a task is used to ingest data from a stream and then perform a DML operation, it is a best practice to check whether the stream has data using the SYSTEM$STREAM_HAS_DATA function. I learned this the hard way: by mistake I created a task in development and forgot to disable it, and the stream check was not there. The task was running every 10 minutes and consumed all available credits in two days. To avoid these situations, in addition to checking for stream data, it is also good practice to set up resource monitors at lower percentages, for example at 30%, 50%, 70% and 90%.
Additional explanation from the Snowflake documentation: Tasks may optionally use table streams to provide a convenient way to continuously process new or changed data. A task can transform new or changed rows that a stream surfaces. Each time a task is scheduled to run, it can verify whether a stream contains change data for a table (using SYSTEM$STREAM_HAS_DATA) and either consume the change data or skip the current run if no change data exists.

8. A stream contains table data.
A. True
B. False
Answer: B
Explanation
Note that a stream itself does not contain any table data. A stream only stores an offset for the source table and returns CDC records by leveraging the versioning history for the source table. When the first stream for a table is created, a pair of hidden columns are added to the source table and begin storing change tracking metadata. These columns consume a small amount of storage. The CDC records returned when querying a stream rely on a combination of the offset stored in the stream and the change tracking metadata stored in the table.
https://docs.snowflake.com/en/user-guide/streams.html#overview-of-table-streams

9. Which of the below SQL statements will you run to validate any loads of the pipe within the last hour?
A. select * from table(validate_pipe_load(pipe_name=>'data_engineer_pipe', start_time=>dateadd(hour, -1, current_timestamp())));
B. select * from table(pipe_load_status(pipe_name=>'data_engineer_pipe', start_time=>dateadd(hour, -1, current_timestamp())));
C. select * from table(check_pipe_load(pipe_name=>'data_engineer_pipe', start_time=>dateadd(hour, -1, current_timestamp())));
Answer: A
Explanation
https://docs.snowflake.com/en/sql-reference/functions/validate_pipe_load.html#validate-pipe-load
VALIDATE_PIPE_LOAD
This table function can be used to validate data files processed by Snowpipe within a specified time range. The function returns details about any errors encountered during an attempted data load into Snowflake tables.

10. For how many days does COPY_HISTORY retain data loading history?
A. 10
B. 15
C. 14
Answer: C
Explanation
https://docs.snowflake.com/en/sql-reference/functions/copy_history.html#copy-history
COPY_HISTORY
This table function can be used to query Snowflake data loading history along various dimensions within the last 14 days. The function returns load activity for both COPY INTO <table> statements and continuous data loading using Snowpipe. The table function avoids the 10,000 row limitation of the LOAD_HISTORY view. The results can be filtered using SQL predicates.
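As an illustration, the sketch below queries the last hour of load history for a hypothetical table RAW_SALES through the INFORMATION_SCHEMA table function:

select file_name, last_load_time, row_count, status, first_error_message
from table(information_schema.copy_history(
    table_name => 'RAW_SALES',
    start_time => dateadd(hour, -1, current_timestamp())));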
11. Each micro-partition contains between 50 MB and 500 MB of uncompressed data.
A. TRUE
B. FALSE
Answer: A
Explanation
What are Micro-partitions?
All data in Snowflake tables is automatically divided into micro-partitions, which are contiguous units of storage. Each micro-partition contains between 50 MB and 500 MB of uncompressed data (note that the actual size in Snowflake is smaller because data is always stored compressed). Groups of rows in tables are mapped into individual micro-partitions, organized in a columnar fashion. This size and structure allows for extremely granular pruning of very large tables, which can be comprised of millions, or even hundreds of millions, of micro-partitions. Snowflake stores metadata about all rows stored in a micro-partition, including:
The range of values for each of the columns in the micro-partition.
The number of distinct values.
Additional properties used for both optimization and efficient query processing.

12. Which of the below are benefits of micro-partitioning?
A. Micro-partitions are derived automatically
B. Micro-partitions need to be maintained by users
C. Micro-partitions enable extremely efficient DML and fine-grained pruning for faster queries
D. Columns are stored independently within micro-partitions
E. Columns are compressed individually within micro-partitions
Answer: A, C, D, E
Explanation
Benefits of Micro-partitioning
The benefits of Snowflake's approach to partitioning table data include:
In contrast to traditional static partitioning, Snowflake micro-partitions are derived automatically; they don't need to be explicitly defined up-front or maintained by users.
As the name suggests, micro-partitions are small in size (50 to 500 MB, before compression), which enables extremely efficient DML and fine-grained pruning for faster queries.
Micro-partitions can overlap in their range of values, which, combined with their uniformly small size, helps prevent skew.
Columns are stored independently within micro-partitions, often referred to as columnar storage. This enables efficient scanning of individual columns; only the columns referenced by a query are scanned.
Columns are also compressed individually within micro-partitions. Snowflake automatically determines the most efficient compression algorithm for the columns in each micro-partition.
https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html#benefits-of-micro-partitioning

13. Snowflake does not prune micro-partitions based on a predicate with a subquery.
A. TRUE
B. FALSE
Answer: A
Explanation
Not all predicate expressions can be used to prune. For example, Snowflake does not prune micro-partitions based on a predicate with a subquery, even if the subquery results in a constant.

14. You have just created a table in Snowflake. There are no rows in the table. What will be the clustering depth of the table?
A. 0
B. 1
C. 2
D. 3
Answer: A
Explanation
A table with no micro-partitions (i.e. an unpopulated/empty table) has a clustering depth of 0.
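A minimal sketch of checking clustering depth for a hypothetical populated table SALES and the candidate clustering column (sale_date):

-- Average depth of overlapping micro-partitions for the given column(s)
select system$clustering_depth('sales', '(sale_date)');
-- More detailed clustering statistics, returned as JSON
select system$clustering_information('sales', '(sale_date)');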
15. Which one of the below is true about clustering depth?
A. The smaller the average depth, the better clustered the table is with respect to the specified columns
B. The larger the average depth, the better clustered the table is with respect to the specified columns
C. The smaller the maximum depth, the better clustered the table is with respect to the specified columns
Answer: A
Explanation
Clustering Depth
The clustering depth for a populated table measures the average depth (1 or greater) of the overlapping micro-partitions for specified columns in a table. The smaller the average depth, the better clustered the table is with regard to the specified columns. Clustering depth can be used for a variety of purposes, including:
Monitoring the clustering "health" of a large table, particularly over time as DML is performed on the table.
Determining whether a large table would benefit from explicitly defining a clustering key.

16. Putting a higher cardinality column before a lower cardinality column will generally reduce the effectiveness of clustering on the latter column.
A. TRUE
B. FALSE
Answer: A
Explanation
If you are defining a multi-column clustering key for a table, the order in which the columns are specified in the CLUSTER BY clause is important. As a general rule, Snowflake recommends ordering the columns from lowest cardinality to highest cardinality. Putting a higher cardinality column before a lower cardinality column will generally reduce the effectiveness of clustering on the latter column.

17. An existing clustering key is copied in which of the below scenarios?
A. CREATE TABLE...CLONE
B. CREATE TABLE...LIKE
C. CREATE TABLE...AS SELECT
Answer: A
Explanation
An existing clustering key is copied when a table is created using CREATE TABLE ... CLONE. A clustering key is not carried over when a table is created using CREATE TABLE ... AS SELECT; however, you can define a clustering key after the new table is created.
https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html

18. Let's say you have a schema named MY_SCHEMA. This schema contains two permanent tables as shown below:
CREATE TABLE MY_TABLE_A (C1 INT) DATA_RETENTION_TIME_IN_DAYS = 10;
CREATE TABLE MY_TABLE_B (C1 INT);
What will be the impact of running the following command?
ALTER SCHEMA MY_SCHEMA SET DATA_RETENTION_TIME_IN_DAYS = 20;
A. Data retention time cannot be set at SCHEMA level, hence it will fail
B. The retention time on MY_TABLE_A does not change; MY_TABLE_B will be set to 20 days
C. The retention time on both the tables will be set to 20 days
D. The retention time will not change for both tables
Answer: B
Explanation
https://docs.snowflake.com/en/user-guide/data-time-travel.html#changing-the-data-retention-period-for-an-object
Changing the retention period for your account or individual objects changes the value for all lower-level objects that do not have a retention period explicitly set. For example:
If you change the retention period at the account level, all databases, schemas, and tables that do not have an explicit retention period automatically inherit the new retention period.
If you change the retention period at the schema level, all tables in the schema that do not have an explicit retention period inherit the new retention period.
Keep this in mind when changing the retention period for your account or any objects in your account, because the change might have Time Travel consequences that you did not anticipate or intend. In particular, we do not recommend changing the retention period to 0 at the account level.
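A minimal sketch of verifying the outcome for the two tables from the question; SHOW PARAMETERS simply reports the effective retention for each table:

alter schema my_schema set data_retention_time_in_days = 20;
-- MY_TABLE_A keeps its explicitly set 10 days; MY_TABLE_B inherits 20 from the schema
show parameters like 'DATA_RETENTION_TIME_IN_DAYS' in table my_schema.my_table_a;
show parameters like 'DATA_RETENTION_TIME_IN_DAYS' in table my_schema.my_table_b;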
19. Which of the below standard API objects are supported by the Snowflake Connector for Python?
A. Connection
B. Cursor
C. SnowflakeConnection
D. SnowflakeCursor
Answer: A, B
Explanation
https://docs.snowflake.com/en/user-guide/python-connector.html#snowflake-connector-for-python
The connector supports developing applications using the Python Database API v2 specification (PEP-249), including using the following standard API objects:
Connection objects for connecting to Snowflake.
Cursor objects for executing DDL/DML statements and queries.
SnowSQL, the command line client provided by Snowflake, is an example of an application developed using the connector.

20. The Snowflake Connector for Python uses a temporary directory to store data for loading and unloading (PUT, GET) as well as other types of temporary data. If the temporary directory is not explicitly set, what does the connector use?
A. The system's default temporary directory (i.e. /tmp, C:\TEMP)
B. The system creates a temporary directory
C. The PUT and GET commands will fail
Answer: A
Explanation
https://docs.snowflake.com/en/user-guide/python-connector-install.html#step-3-specify-a-temporary-directory
The Snowflake Connector for Python uses a temporary directory to store data for loading and unloading (PUT, GET), as well as other types of temporary data. The temporary directory can be explicitly specified by setting the TMPDIR, TEMP or TMP environment variables; otherwise the operating system's default temporary directory (i.e. /tmp, C:\temp) is used. If the system's default temporary directory volume is not large enough for the data being processed, you should specify a different directory using any of the supported environment variables. For example, from a terminal window, execute the following command:
export TMPDIR=/large_tmp_volume

21. In order to improve query performance, you can bypass data conversions from the Snowflake internal data type to the native Python data type. Which class in the snowflake.connector.converter_null module do you use for this feature?
A. SnowflakeNoConverterToPython
B. PythonNoConverterToSnowflake
C. ByPassDataConversion
Answer: A
Explanation
https://docs.snowflake.com/en/user-guide/python-connector-example.html#improving-query-performance-by-bypassing-data-conversion
To improve query performance, use the SnowflakeNoConverterToPython class in the snowflake.connector.converter_null module to bypass data conversions from the Snowflake internal data type to the native Python data type, e.g.:

from snowflake.connector.converter_null import SnowflakeNoConverterToPython

con = snowflake.connector.connect(
    ...
    converter_class=SnowflakeNoConverterToPython
)
for rec in con.cursor().execute("SELECT * FROM large_table"):
    # rec includes raw Snowflake data
    ...

As a result, all data is represented in string form such that the application is responsible for converting it to the native Python data types. For example, TIMESTAMP_NTZ and TIMESTAMP_LTZ data are the epoch time represented in string form, and TIMESTAMP_TZ data is the epoch time followed by a space followed by the offset to UTC in minutes, represented in string form. No impact is made to binding data; Python native data can still be bound for updates.
22. It is a best practice to avoid binding data using Python's formatting functions due to the risk of SQL injection.
A. TRUE
B. FALSE
Answer: A
Explanation
Avoid SQL Injection Attacks
Avoid binding data using Python's formatting functions because you risk SQL injection. For example:

# Binding data (UNSAFE EXAMPLE)
con.cursor().execute(
    "INSERT INTO testtable(col1, col2) "
    "VALUES(%(col1)d, '%(col2)s')" % {
        'col1': 789,
        'col2': 'test string3'
    })

# Binding data (UNSAFE EXAMPLE)
con.cursor().execute(
    "INSERT INTO testtable(col1, col2) "
    "VALUES(%d, '%s')" % (
        789,
        'test string3'
    ))

# Binding data (UNSAFE EXAMPLE)
con.cursor().execute(
    "INSERT INTO testtable(col1, col2) "
    "VALUES({col1}, '{col2}')".format(
        col1=789,
        col2='test string3'))

Instead, store the values in variables, check those values (for example, by looking for suspicious semicolons inside strings), and then bind the parameters using qmark or numeric binding style.

23. Which system table will you use to get the total credit consumption over a specific time period?
A. WAREHOUSE_METERING_HISTORY
B. WAREHOUSE_CREDIT_USAGE_HISTORY
C. WAREHOUSE_USAGE_HISTORY
Answer: A
Explanation
The WAREHOUSE_METERING_HISTORY table in the ACCOUNT_USAGE schema can be used to get the desired information. Run the below query to try this out:
SELECT WAREHOUSE_NAME,
       SUM(CREDITS_USED_COMPUTE) AS CREDITS_USED_COMPUTE_SUM
FROM ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
GROUP BY 1
ORDER BY 2 DESC;

24. You are asked to find out the average number of queries run on an hourly basis to better understand query activity. Which of the system tables will you use to create your query to get this information?
A. QUERY_HISTORY
B. QUERY_LOG
C. QUERY_MONITOR
Answer: A
Explanation
The QUERY_HISTORY table in the ACCOUNT_USAGE schema will give you this information. Please try the below query:
SELECT DATE_TRUNC('HOUR', START_TIME) AS QUERY_START_HOUR,
       WAREHOUSE_NAME,
       COUNT(*) AS NUM_QUERIES
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE START_TIME >= DATEADD(DAY, -7, CURRENT_TIMESTAMP())
GROUP BY 1, 2
ORDER BY 1 DESC, 2;

25. Which of the below joins are supported by Snowflake?
A. INNER JOIN
B. OUTER JOIN
C. CROSS JOIN
D. NATURAL JOIN
E. SIDE JOIN
Answer: A, B, C, D
Explanation
https://docs.snowflake.com/en/user-guide/querying-joins.html#types-of-joins
Snowflake supports the following types of joins:
Inner join.
Outer join.
Cross join.
Natural join.

26. Which Snowflake parameter limits the number of iterations of a recursive CTE?
A. MAX_RECURSIONS
B. MAX_ITERATIONS
C. MAX_LOOP
Answer: A
Explanation
Recursive CTE Considerations: Potential for Infinite Loops
In theory, constructing a recursive CTE incorrectly can cause an infinite loop. In practice, Snowflake prevents this by limiting the number of iterations that the recursive clause will perform in a single query. The MAX_RECURSIONS parameter limits the number of iterations. To change MAX_RECURSIONS for your account, please contact Snowflake Support.
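A minimal sketch of a recursive CTE over a hypothetical EMPLOYEES table with columns (id, name, manager_id); the number of iterations of the recursive clause is capped by MAX_RECURSIONS:

with recursive reports (id, name, manager_id, depth) as (
    -- anchor clause: employees at the top of the hierarchy
    select id, name, manager_id, 1
    from employees
    where manager_id is null
    union all
    -- recursive clause: walk one level down per iteration
    select e.id, e.name, e.manager_id, r.depth + 1
    from employees e
    join reports r on e.manager_id = r.id
)
select * from reports order by depth, id;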
27. What are the two techniques available to query hierarchical data?
A. RECURSIVE CTEs
B. CONNECT BY
C. CONNECT WITH
D. RECURSION
Answer: A, B
Explanation
https://docs.snowflake.com/en/user-guide/queries-hierarchical.html#using-connect-by-or-recursive-ctes-to-query-hierarchical-data
Snowflake provides two ways to query hierarchical data in which the number of levels is not known in advance:
Recursive CTEs (common table expressions).
CONNECT BY clauses.
A recursive CTE allows you to create a WITH clause that can refer to itself. This lets you iterate through each level of your hierarchy and accumulate results. A CONNECT BY clause allows you to create a type of JOIN operation that processes the hierarchy one level at a time, and allows each level to refer to data in the prior level.

28. Which of the below functions are recommended to be used to understand the clustering ratio of a table?
A. SYSTEM$CLUSTERING_RATIO
B. SYSTEM$CLUSTERING_DEPTH
C. SYSTEM$CLUSTERING_INFORMATION
Answer: B, C
Explanation
SYSTEM$CLUSTERING_RATIO is deprecated; Snowflake recommends using SYSTEM$CLUSTERING_DEPTH or SYSTEM$CLUSTERING_INFORMATION instead.
https://docs.snowflake.com/en/sql-reference/functions/system_clustering_ratio.html

29. You have an EMPLOYEES table and you want to view the EXPLAIN results in tabular form for the below query:
SELECT * FROM EMPLOYEES;
Which of the options below can you use to do this?
A. SELECT * FROM TABLE(EXPLAIN_JSON(SYSTEM$EXPLAIN_PLAN_JSON('SELECT * FROM EMPLOYEES')));
B. EXPLAIN USING TABULAR SELECT * FROM EMPLOYEES;
C. EXPLAIN AND CONVERT TO TABULAR SELECT * FROM EMPLOYEES;
Answer: A, B
Explanation
https://docs.snowflake.com/en/sql-reference/functions/explain_json.html#explain-json
EXPLAIN_JSON
This function converts an EXPLAIN plan from JSON to a table. The output is the same as the output of the command EXPLAIN USING TABULAR <statement>.
See also: SYSTEM$EXPLAIN_PLAN_JSON, SYSTEM$EXPLAIN_JSON_TO_TEXT

30. Which of the below statements are true?
A. ACCOUNT_USAGE includes dropped objects but INFORMATION_SCHEMA does not
B. INFORMATION_SCHEMA includes dropped objects but ACCOUNT_USAGE does not
C. Both include dropped objects
D. Neither includes dropped objects
Answer: A
Explanation
https://docs.snowflake.com/en/sql-reference/account-usage.html#differences-between-account-usage-and-information-schema
Dropped Object Records
Account usage views include records for all objects that have been dropped. An additional DELETED column displays the timestamp when the object was dropped. In addition, because objects can be dropped and recreated with the same name, to differentiate between object records that have the same name, the account usage views include ID columns, where appropriate, that display the internal IDs generated and assigned to each record by the system.

31. You have a query that is spilling to remote storage. The query profile shows the below metrics:
Bytes spilled to local storage: 41.55 GB
Bytes spilled to remote storage: 8.16 GB
Which of the below techniques will you use to decrease remote spilling?
A. Review the query for optimization, especially if it is a new query
B. Reduce the amount of data processed by trying to improve partition pruning
C. Decrease the number of parallel queries running in the warehouse
D. Try to split the processing into several steps, for example by replacing CTEs with temporary tables
E. Use a larger warehouse
F. Use a multi-cluster warehouse
Answer: A, B, C, D, E
Explanation
https://community.snowflake.com/s/article/Performance-impact-from-local-and-remote-disk-spilling
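As a starting point for finding the offending queries, the sketch below lists recent queries that spilled to remote storage; the spill columns shown are part of the SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view:

select query_id,
       warehouse_name,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
from snowflake.account_usage.query_history
where start_time >= dateadd(day, -7, current_timestamp())
  and bytes_spilled_to_remote_storage > 0
order by bytes_spilled_to_remote_storage desc;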
32. Time Travel cannot be disabled for an account, but it can be disabled for individual databases, schemas, and tables by specifying DATA_RETENTION_TIME_IN_DAYS with a value of 0 for the object.
A. TRUE
B. FALSE
Answer: A
Explanation
https://docs.snowflake.com/en/user-guide/data-time-travel.html#enabling-and-disabling-time-travel
Enabling and Disabling Time Travel
No tasks are required to enable Time Travel. It is automatically enabled with the standard, 1-day retention period. However, you may wish to upgrade to Snowflake Enterprise Edition to enable configuring longer data retention periods of up to 90 days for databases, schemas, and tables. Note that extended data retention requires additional storage, which will be reflected in your monthly storage charges. For more information about storage charges, see Storage Costs for Time Travel and Fail-safe.
Time Travel cannot be disabled for an account; however, it can be disabled for individual databases, schemas, and tables by specifying DATA_RETENTION_TIME_IN_DAYS with a value of 0 for the object. Also, users with the ACCOUNTADMIN role can set DATA_RETENTION_TIME_IN_DAYS to 0 at the account level, which means that all databases (and subsequently all schemas and tables) created in the account have no retention period by default; however, this default can be overridden at any time for any database, schema, or table.

33. Let's say you created a schema and a table as below:
CREATE OR REPLACE SCHEMA TIME_TRAVEL_SCHEMA DATA_RETENTION_TIME_IN_DAYS = 10;
CREATE OR REPLACE TABLE TIME_TRAVEL_SCHEMA.TIME_TRAVEL_TABLE (ID NUMBER) DATA_RETENTION_TIME_IN_DAYS = 20;
Later you dropped the schema. In this scenario, what data retention value will be honored for the table if we need to retrieve the table data?
A. 10
B. 20
C. 30
Answer: A
Explanation
https://docs.snowflake.com/en/user-guide/data-time-travel.html#dropped-containers-and-object-retention-inheritance
Dropped Containers and Object Retention Inheritance
Currently, when a database is dropped, the data retention period for child schemas or tables, if explicitly set to be different from the retention of the database, is not honored. The child schemas or tables are retained for the same period of time as the database. Similarly, when a schema is dropped, the data retention period for child tables, if explicitly set to be different from the retention of the schema, is not honored. The child tables are retained for the same period of time as the schema. To honor the data retention period for these child objects (schemas or tables), drop them explicitly before you drop the database or schema.
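A minimal sketch continuing the scenario from the question: once the schema is dropped, both the schema and the child table follow the schema's 10-day retention, so the restore has to happen within that window:

drop schema time_travel_schema;
-- the restore must occur within the schema's 10-day retention period
undrop schema time_travel_schema;
select * from time_travel_schema.time_travel_table;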
34. If you run the below operations, what will be the output from the last SELECT statement?
CREATE OR REPLACE TABLE EMPLOYEE(ID VARCHAR, NAME VARCHAR);
INSERT INTO EMPLOYEE VALUES('1','MOHAN');
CREATE OR REPLACE PROCEDURE sp1()
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
$$
;
CREATE OR REPLACE PROCEDURE sp2()
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
$$
;
CALL sp1();
SELECT * FROM EMPLOYEE ORDER BY ID;
A. 1 MOHAN, 2 RON
B. 1 MOHAN, 2 RON, 3 RANJAN
C. 1 MOHAN, 3 RANJAN
D. 1 MOHAN
Answer: A
Explanation
https://docs.snowflake.com/en/sql-reference/transactions.html#scoped-transactions
Scoped Transactions
A stored procedure that contains a transaction can be called from within another transaction. For example, a transaction inside a stored procedure can include a call to another stored procedure that contains a transaction. Snowflake does not treat the inner transaction as nested; instead, the inner transaction is a separate transaction. Snowflake calls these "autonomous scoped transactions" (or simply "scoped transactions"). The starting point and ending point of each scoped transaction determine which statements are included in the transaction. The start and end can be explicit or implicit. Each SQL statement is part of only one transaction. An enclosing ROLLBACK or COMMIT does not undo an enclosed COMMIT or ROLLBACK.

35. What will happen if you try to create a materialized view as below?
create or replace materialized view employee_view as
select id, role, name
from employee
where role = current_role();
A. Only users with the desired role will be able to retrieve results from the view
B. Only users with the correct role assigned and SYSADMIN will be able to retrieve the results from the view
C. The operation will error out since the function used is a non-deterministic one
Answer: C
Explanation
It will error out with the below message:
SQL compilation error: error line 3 at position 33 Invalid materialized view definition. Non-deterministic function 'CURRENT_ROLE' not allowed in view definition.
https://docs.snowflake.com/en/user-guide/views-materialized.html#limitations-on-creating-materialized-views
Functions used in a materialized view must be deterministic. For example, using CURRENT_TIME or CURRENT_TIMESTAMP is not permitted.

36. What will happen when you try to create a materialized view as below?
create or replace materialized view mv2 as
select
A. The operation will be successful and the materialized view will be created
B. Materialized views always need to be secure views, hence this will fail
C. Aggregate functions used in complex expressions can only be used in the outermost level of a query, not in a subquery or an in-line view, hence this operation will fail
Answer: C
Explanation
Aggregate functions used in complex expressions (e.g. (sum(salary)/10)) can only be used in the outer-most level of a query, not in a subquery or an in-line view.
For example, the following is allowed:
create materialized view mv1 as select sum(x) + 100 from t;
The following is not allowed:
create materialized view mv2 as select y + 10 from ( select