MULESOFT CERTIFIED ARCHITECT Exam MCIA-Level 1 Questions V12.02
MuleSoft Certified Architect Topics - MuleSoft Certified Integration Architect - Level 1

1.An organization uses Mule runtimes which are managed by Anypoint Platform - Private Cloud Edition.
What MuleSoft component is responsible for feeding analytics data to non-MuleSoft analytics platforms?
A. Anypoint Exchange
B. The Mule runtimes
C. Anypoint API Manager
D. Anypoint Runtime Manager
Answer: D
Explanation:
The correct answer is Anypoint Runtime Manager. MuleSoft Anypoint Runtime Manager (ARM) provides connectivity to Mule runtime engines deployed across your organization to provide centralized management, monitoring, and analytics reporting. However, most enterprise customers find it necessary for these on-premises runtimes to integrate with their existing non-MuleSoft analytics/monitoring systems, such as Splunk and ELK, to support a single-pane-of-glass view across the infrastructure.
* You can configure the Runtime Manager agent to export data to external analytics tools. Using either the Runtime Manager cloud console or Anypoint Platform Private Cloud Edition, you can:
--> Send Mule event notifications, including flow executions and exceptions, to Splunk or ELK.
--> Send API Analytics to Splunk or ELK.
Sending data to third-party tools is not supported for applications deployed on CloudHub. You can use the CloudHub custom log appender to integrate with your logging system.
Reference: https://docs.mulesoft.com/runtime-manager/ https://docs.mulesoft.com/release-notes/runtime-manager-agent/runtime-manager-agent-release-notes
Additional info: this can be achieved in 3 steps:
1) register an agent with Runtime Manager,
2) configure a gateway to enable API analytics to be sent to a non-MuleSoft analytics platform (Splunk, for example), and
3) set up dashboards.

2.An organization will deploy Mule applications to CloudHub. Business requirements mandate that all application logs be stored ONLY in an external Splunk consolidated logging service and NOT in CloudHub.
In order to most easily store Mule application logs ONLY in Splunk, how must Mule application logging be configured in Runtime Manager, and where should the log4j2 Splunk appender be defined?
A. Keep the default logging configuration in Runtime Manager
Define the Splunk appender in ONE global log4j.xml file that is uploaded once to Runtime Manager to support all Mule application deployments
B. Disable CloudHub logging in Runtime Manager
Define the Splunk appender in EACH Mule application's log4j2.xml file
C. Disable CloudHub logging in Runtime Manager
Define the Splunk appender in ONE global log4j.xml file that is uploaded once to Runtime Manager to support all Mule application deployments
D. Keep the default logging configuration in Runtime Manager
Define the Splunk appender in EACH Mule application's log4j2.xml file
Answer: B
Explanation:
By default, CloudHub replaces a Mule application's log4j2.xml file with a CloudHub log4j2.xml file. In CloudHub, you can disable the CloudHub-provided Mule application log4j2 file. This allows integrating Mule application logs with custom or third-party log management systems.
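For illustration, below is a minimal sketch of what such a per-application src/main/resources/log4j2.xml could look like. It assumes the Splunk HTTP Event Collector appender from Splunk's splunk-library-javalogging dependency; the endpoint URL, token property, and index name are placeholders to replace with your own values.

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
        <Appenders>
            <!-- Hypothetical HEC endpoint, token, and index: replace with your Splunk values -->
            <SplunkHttp name="splunk"
                        url="https://splunk.example.com:8088"
                        token="${sys:splunk.hec.token}"
                        index="mule_apps">
                <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
            </SplunkHttp>
        </Appenders>
        <Loggers>
            <!-- Route all application logging to Splunk only -->
            <AsyncRoot level="INFO">
                <AppenderRef ref="splunk"/>
            </AsyncRoot>
        </Loggers>
    </Configuration>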
3.In Anypoint Platform, a company wants to configure multiple identity providers (IdPs) for multiple lines of business (LOBs). Multiple business groups, teams, and environments have been defined for these LOBs.
What Anypoint Platform feature can use multiple IdPs across the company's business groups, teams, and environments?
A. MuleSoft-hosted (CloudHub) dedicated load balancers
B. Client (application) management
C. Virtual private clouds
D. Permissions
Answer: A
Explanation:
To use a dedicated load balancer in your environment, you must first create an Anypoint VPC. Because you can associate multiple environments with the same Anypoint VPC, you can use the same dedicated load balancer for your different environments.
Reference: https://docs.mulesoft.com/runtime-manager/cloudhub-dedicated-load-balancer

4.As part of a business requirement, an old CRM system needs to be integrated using a Mule application. The CRM system is capable of exchanging data only via the SOAP/HTTP protocol. As an integration architect who follows the API-led approach, which of the below steps will you perform so that you can share the specification document with the CRM team?
A. Create RAML specification using Design Center
B. Create SOAP API specification using Design Center
C. Create WSDL specification using text editor
D. Create WSDL specification using Design Center
Answer: C
Explanation:
The correct answer is Create WSDL specification using text editor. SOAP services are specified using WSDL. A client program connecting to a web service can read the WSDL to determine what functions are available on the server. We cannot create a WSDL specification in Design Center; we need to use an external text editor to create the WSDL.

5.A retailer is designing a data exchange interface to be used by its suppliers. The interface must support secure communication over the public internet. The interface must also work with a wide variety of programming languages and IT systems used by suppliers.
What are suitable interface technologies for this data exchange that are secure, cross-platform, and internet friendly, assuming that Anypoint Connectors exist for these interface technologies?
A. EDIFACT XML over SFTP, JSON/REST over HTTPS
B. SOAP over HTTPS, IIOP over TLS, gRPC over HTTPS
C. XML over ActiveMQ, XML over SFTP, XML/REST over HTTPS
D. CSV over FTP, YAML over TLS, JSON over HTTPS
Answer: C
Explanation:
* As per MuleSoft's definition of an API, it is an Application Programming Interface using HTTP-based protocols. Non-HTTP-based programmatic interfaces are not APIs.
* HTTP-based programmatic interfaces are APIs even if they don't use REST or JSON. Hence implementations based on Java RMI, CORBA/IIOP, or raw TCP/IP interfaces are not APIs, as they are not using HTTP.
* One more thing to note is that FTP was not built to be secure. It is generally considered an insecure protocol because it relies on clear-text usernames and passwords for authentication and does not use encryption.
* Data sent via FTP is vulnerable to sniffing, spoofing, and brute force attacks, among other basic attack methods.
Considering the above points, the only correct option is: XML over ActiveMQ, XML over SFTP, XML/REST over HTTPS.

6.An integration Mule application is deployed to a customer-hosted multi-node Mule 4 runtime cluster.
The Mule application uses a Listener operation of a JMS connector to receive incoming messages from a JMS queue.
How are the messages consumed by the Mule application?
A. Depending on the JMS provider's configuration, either all messages are consumed by ONLY the primary cluster node or else ALL messages are consumed by ALL cluster nodes
B. Regardless of the Listener operation configuration, all messages are consumed by ALL cluster nodes
C. Depending on the Listener operation configuration, either all messages are consumed by ONLY the primary cluster node or else EACH message is consumed by ANY ONE cluster node
D. Regardless of the Listener operation configuration, all messages are consumed by ONLY the primary cluster node
Answer: C
Explanation:
The correct answer is: Depending on the Listener operation configuration, either all messages are consumed by ONLY the primary cluster node or else EACH message is consumed by ANY ONE cluster node.
For applications running in clusters, you have to keep in mind the concept of the primary node and how the connector will behave. When running in a cluster, the JMS listener's default behavior is to receive messages only on the primary node, no matter what kind of destination you are consuming from. When consuming messages from a queue, you'll want to change this configuration to receive messages on all the nodes of the cluster, not just the primary.
This can be done with the primaryNodeOnly parameter:
<jms:listener config-ref="config" destination="${inputQueue}" primaryNodeOnly="false"/>

7.What Anypoint Connectors support transactions?
A. Database, JMS, VM
B. Database, JMS, HTTP
C. Database, JMS, VM, SFTP
D. Database, VM, File
Answer: A
Explanation:
The following Anypoint Connectors support transactions:
- JMS: Publish, Consume
- VM: Publish, Consume
- Database: all operations
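To make the transaction support concrete, here is a minimal sketch of a Try scope grouping two Database operations into one local transaction, so both inserts commit or roll back together. The config name and SQL are hypothetical.

    <try transactionalAction="ALWAYS_BEGIN" transactionType="LOCAL">
        <!-- Both operations join the same local transaction -->
        <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
            <db:sql><![CDATA[INSERT INTO order_header (id) VALUES (:id)]]></db:sql>
            <db:input-parameters><![CDATA[#[{id: payload.id}]]]></db:input-parameters>
        </db:insert>
        <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
            <db:sql><![CDATA[INSERT INTO order_audit (id) VALUES (:id)]]></db:sql>
            <db:input-parameters><![CDATA[#[{id: payload.id}]]]></db:input-parameters>
        </db:insert>
    </try>

Note that a local transaction can span only one resource (here, a single Database config); combining different resource types requires an XA transaction, as covered in a later question.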
8.A Mule application contains a Batch Job scope with several Batch Step scopes. The Batch Job scope is configured with a batch block size of 25.
A payload with 4,000 records is received by the Batch Job scope.
When there are no errors, how does the Batch Job scope process records within and between the Batch Step scopes?
A. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records
Each Batch Step scope is invoked with one record in the payload of the received Mule event
For each Batch Step scope, all 25 records within a block are processed in parallel
All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope
B. The Batch Job scope processes each record block sequentially, one at a time
Each Batch Step scope is invoked with one record in the payload of the received Mule event
For each Batch Step scope, all 25 records within a block are processed sequentially, one at a time
All 4,000 records must be completed before the blocks of records are available to the next Batch Step scope
C. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records
Each Batch Step scope is invoked with one record in the payload of the received Mule event
For each Batch Step scope, all 25 records within a block are processed sequentially, one record at a time
All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope
D. The Batch Job scope processes multiple record blocks in parallel
Each Batch Step scope is invoked with a batch of 25 records in the payload of the received Mule event
For each Batch Step scope, all 4,000 records are processed in parallel
Individual records can jump ahead to the next Batch Step scope before the rest of the records finish processing in the current Batch Step scope
Answer: C
Explanation:
Mule batch processing splits the incoming records into blocks (here, 25 records per block) and dispatches those blocks to multiple threads. Each thread then iterates through its block sequentially, invoking the Batch Step once per record, and a completed block can move on to the next Batch Step ahead of earlier blocks.
Reference: https://docs.mulesoft.com/mule-runtime/4.4/batch-processing-concept
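A minimal sketch of such a Batch Job, using the block size from the question; the step names and referenced flows are hypothetical.

    <batch:job jobName="salesOrderBatch" blockSize="25">
        <batch:process-records>
            <batch:step name="validateStep">
                <!-- Invoked once per record: the payload is a single record -->
                <flow-ref name="validateRecord"/>
            </batch:step>
            <batch:step name="enrichStep">
                <flow-ref name="enrichRecord"/>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <!-- The payload here is the batch job result summary -->
            <logger level="INFO"
                    message="#['Successful records: $(payload.successfulRecords)']"/>
        </batch:on-complete>
    </batch:job>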
9.A Mule application is deployed to a customer-hosted runtime. Asynchronous logging was implemented to improve throughput of the system. But it was observed over a period of time that a few of the important exception log messages, which were used to roll back transactions, are not working as expected, causing huge losses to the organization. The organization wants to avoid these losses. The application also has constraints due to which it cannot compromise much on throughput.
What is the possible option in this case?
A. Logging needs to be changed from asynchronous to synchronous
B. External log appender needs to be used in this case
C. Persistent memory storage should be used in such scenarios
D. Mixed configuration of asynchronous and synchronous loggers should be used to log exceptions synchronously
Answer: D
Explanation:
The correct approach is to use a mixed configuration of asynchronous and synchronous loggers, logging exceptions synchronously. Asynchronous logging poses a performance-reliability trade-off: you may lose some messages if Mule crashes before the logging buffers flush to the disk. In this case, consider that you can have a mixed configuration of asynchronous and synchronous loggers in your app. Best practice is to use asynchronous logging over synchronous with a minimum logging level of WARN for a production application. In some cases, enable the INFO logging level when you need to confirm events such as successful policy installation or to perform troubleshooting. Configure your logging strategy by editing your application's src/main/resources/log4j2.xml file.

10.What is a key difference between synchronous and asynchronous logging from Mule applications?
A. Synchronous logging writes log messages in a single logging thread but does not block the Mule event being processed by the next event processor
B. Asynchronous logging can improve Mule event processing throughput while also reducing the processing time for each Mule event
C. Asynchronous logging produces more reliable audit trails with more accurate timestamps
D. Synchronous logging within an ongoing transaction writes log messages in the same thread that processes the current Mule event
Answer: B
Explanation:
Types of logging:
A) Synchronous: the execution of the thread that is processing the message is interrupted to wait for the log message to be fully output before it can continue.
- Performance degrades because of synchronous logging.
- Used when the log is used as an audit trail or when logging ERROR/CRITICAL messages.
- If the logger fails to write to disk, the exception is raised on the same thread that is currently processing the Mule event. If logging is critical for you, then you can roll back the transaction.
B) Asynchronous: the logging operation occurs in a separate thread, so the actual processing of your message won't be delayed to wait for the logging to complete.
- Substantial improvement in throughput and latency of message processing.
- Mule runtime engine (Mule) 4 uses Log4j 2 asynchronous logging by default.
- The disadvantage of asynchronous logging is error handling. If the logger fails to write to disk, the thread doing the processing won't be aware of any issues writing to the disk, so you won't be able to roll back anything. Because the actual writing of the log gets deferred, there's a chance that log messages might never make it to disk and get lost, if Mule were to crash before the buffers are flushed.
So the correct answer is: Asynchronous logging can improve Mule event processing throughput while also reducing the processing time for each Mule event.
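A minimal log4j2.xml sketch of the mixed strategy discussed in questions 9 and 10: one synchronous logger for transaction-critical exception messages, asynchronous logging for everything else. The package name and file paths are hypothetical.

    <Configuration>
        <Appenders>
            <RollingFile name="file"
                         fileName="${sys:mule.home}/logs/app.log"
                         filePattern="${sys:mule.home}/logs/app-%i.log">
                <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
                <SizeBasedTriggeringPolicy size="10 MB"/>
            </RollingFile>
        </Appenders>
        <Loggers>
            <!-- Synchronous logger: exception messages used to roll back transactions -->
            <Logger name="com.acme.transactions" level="ERROR" additivity="false">
                <AppenderRef ref="file"/>
            </Logger>
            <!-- Asynchronous root logger preserves throughput for everything else -->
            <AsyncRoot level="WARN">
                <AppenderRef ref="file"/>
            </AsyncRoot>
        </Loggers>
    </Configuration>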
11.A Mule application uses the Database connector.
What condition can the Mule application automatically adjust to or recover from without needing to restart or redeploy the Mule application?
A. One of the stored procedures being called by the Mule application has been renamed
B. The database server was unavailable for four hours due to a major outage but is now fully operational again
C. The credentials for accessing the database have been updated and the previous credentials are no longer valid
D. The database server has been updated and hence the database driver library/JAR needs a minor version upgrade
Answer: B
Explanation:
* Any change in the application will require a restart, except when the issue is outside the app. For the situations below, you would need to redeploy the code after making the necessary changes:
-- One of the stored procedures being called by the Mule application has been renamed. In this case, you will have to change the Mule application to accommodate the new stored procedure name.
-- The database driver library/JAR needs a version upgrade. As the application's dependencies change, redeployment is a must.
-- The credentials for accessing the database have been updated and the previous credentials are no longer valid. In this situation you need to restart or redeploy, depending on how the credentials are configured in the Mule application.
* So the correct answer is: The database server was unavailable for four hours due to a major outage but is now fully operational again, as this is the only issue external to the application.
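A reconnection strategy on the connection provider is what lets the application ride out such an outage without a restart. A sketch, with hypothetical connection details:

    <db:config name="Database_Config">
        <db:my-sql-connection host="db.internal" port="3306"
                              user="${db.user}" password="${db.password}" database="orders">
            <!-- Retry every 5 seconds, up to 10 times, whenever the connection drops -->
            <reconnection failsDeployment="false">
                <reconnect frequency="5000" count="10"/>
            </reconnection>
        </db:my-sql-connection>
    </db:config>

For indefinite retries during long outages, a reconnect-forever strategy can be used instead of a bounded count.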
12.An organization has defined a common object model in Java to mediate the communication between different Mule applications in a consistent way. A Mule application is being built to use this common object model to process responses from a SOAP API and a REST API and then write the processed results to an order management system.
The developers want Anypoint Studio to utilize these common objects to assist in creating mappings for various transformation steps in the Mule application.
What is the most idiomatic (used for its intended purpose) and performant way to utilize these common objects to map between the inbound and outbound systems in the Mule application?
A. Use JAXB (XML) and Jackson (JSON) data bindings
B. Use the WSS module
C. Use the Java module
D. Use the Transform Message component
Answer: A
Explanation:
Reference: https://docs.mulesoft.com/mule-runtime/3.9/understanding-mule-configuration

13.An organization currently uses a multi-node Mule runtime deployment model within their datacenter, so each Mule runtime hosts several Mule applications. The organization is planning to transition to a deployment model based on Docker containers in a Kubernetes cluster. The organization has already created a standard Docker image containing a Mule runtime and all required dependencies (including a JVM), but excluding the Mule application itself.
What is an expected outcome of this transition to container-based Mule application deployments?
A. Required redesign of Mule applications to follow microservice architecture principles
B. Required migration to the Docker and Kubernetes-based Anypoint Platform - Private Cloud Edition
C. Required change to the URL endpoints used by clients to send requests to the Mule applications
D. Guaranteed consistency of execution environments across all deployments of a Mule application
Answer: A
Explanation:
* The organization can continue using its existing load balancer even if the backend applications change, so a required change to the URL endpoints used by clients is ruled out.
* Running Mule runtimes in Docker containers within the organization's own datacenter does not force a migration to Anypoint Platform - Private Cloud Edition, so that option is ruled out as well.
* Today each Mule runtime hosts several Mule applications, which typically relies on shared domain projects. Moving to one application per container means those applications must be redesigned to be self-contained, following microservice architecture principles.
Correct answer: Required redesign of Mule applications to follow microservice architecture principles.

14.What limits whether a particular Anypoint Platform user can discover an asset in Anypoint Exchange?
A. Design Center and RAML were both used to create the asset
B. The existence of a public Anypoint Exchange portal to which the asset has been published
C. The type of the asset in Anypoint Exchange
D. The business groups to which the user belongs
Answer: D
Explanation:
* "The existence of a public Anypoint Exchange portal to which the asset has been published" - the question does not mention anything about the public portal. Besides, the public portal is open to the internet, to anyone.
* If you cannot find an asset in the current business group scopes, search in other scopes. In the left navigation bar click All assets (assets provided by MuleSoft and your own master organization), Provided by MuleSoft, or a business group scope. A user belonging to one business group can see only the assets related to his group.
Reference: https://docs.mulesoft.com/exchange/to-find-info https://docs.mulesoft.com/exchange/asset-details
The correct answer is: The business groups to which the user belongs.

15.An organization is evaluating using the CloudHub shared Load Balancer (SLB) vs creating a CloudHub dedicated load balancer (DLB). They are evaluating how this choice affects the various types of certificates used by CloudHub deployed Mule applications, including MuleSoft-provided, customer-provided, or Mule application-provided certificates.
What type of restrictions exist on the types of certificates for the service that can be exposed by the CloudHub Shared Load Balancer (SLB) to external web clients over the public internet?
A. Underlying Mule applications need to implement their own certificates
B. Only MuleSoft-provided certificates can be used for the server-side certificate
C. Only self-signed certificates can be used
D. All certificates which can be used in the shared load balancer need to be approved by raising a support ticket
Answer: B
Explanation:
The correct answer is: Only MuleSoft-provided certificates can be used for the server-side certificate.
* The CloudHub Shared Load Balancer terminates TLS connections and uses its own server-side certificate.
* You would need to use a dedicated load balancer, which can enable you to define SSL configurations to provide custom certificates and optionally enforce two-way SSL client authentication.
* To use a dedicated load balancer in your environment, you must first create an Anypoint VPC. Because you can associate multiple environments with the same Anypoint VPC, you can use the same dedicated load balancer for your different environments.
Additional info on SLB vs DLB: (comparison table from the original document not reproduced here)

16.What is true about the network connections when a Mule application uses a JMS connector to interact with a JMS provider (message broker)?
A. To complete sending a JMS message, the JMS connector must establish a network connection with the JMS message recipient
B. To receive messages into the Mule application, the JMS provider initiates a network connection to the JMS connector and pushes messages along this connection
C. The JMS connector supports both sending and receiving of JMS messages over the protocol determined by the JMS provider
D. The AMQP protocol can be used by the JMS connector to portably establish connections to various types of JMS providers
Answer: C
Explanation:
* To send or receive a JMS (Java Message Service) message, no separate network connection to the message recipient needs to be established, and the provider does not initiate connections to the connector. So options A, B and D are ruled out.
Correct answer: The JMS connector supports both sending and receiving of JMS messages over the protocol determined by the JMS provider.
* The JMS Connector enables sending and receiving messages to queues and topics for any message service that implements the JMS specification.
* JMS is a widely used API for message-oriented middleware.
* It enables the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous.
Reference: https://docs.mulesoft.com/jms-connector/1.7/

17.A Mule application is being designed to do the following:
Step 1: Read a SalesOrder message from a JMS queue, where each SalesOrder consists of a header and a list of SalesOrderLineItems.
Step 2: Insert the SalesOrder header and each SalesOrderLineItem into different tables in an RDBMS.
Step 3: Insert the SalesOrder header and the sum of the prices of all its SalesOrderLineItems into a table in a different RDBMS.
No SalesOrder message can be lost and the consistency of all SalesOrder-related information in both RDBMSs must be ensured at all times.
What design choice (including choice of transactions) and order of steps addresses these requirements?
A. 1) Read the JMS message (NOT in an XA transaction)
2) Perform BOTH DB inserts in ONE DB transaction
3) Acknowledge the JMS message
B. 1) Read the JMS message (NOT in an XA transaction)
2) Perform EACH DB insert in a SEPARATE DB transaction
3) Acknowledge the JMS message
C. 1) Read the JMS message in an XA transaction
2) In the SAME XA transaction, perform BOTH DB inserts but do NOT acknowledge the JMS message
D. 1) Read and acknowledge the JMS message (NOT in an XA transaction)
2) In a NEW XA transaction, perform BOTH DB inserts
Answer: C
Explanation:
Option B says "Perform EACH DB insert in a SEPARATE DB transaction". In this case, if the first DB insert is successful and the second one fails, the first insert won't be rolled back, causing inconsistency. This option is ruled out.
Option A says "Perform BOTH DB inserts in ONE DB transaction". The rule of thumb is that when more than one DB connection is required we must use an XA transaction, as local transactions support only one resource; here the two inserts target two different RDBMSs. So this option is also ruled out.
Option D acknowledges the message before the DB processing, so the message is removed from the queue. In case of a system failure at a later point, the message can't be retrieved.
Option C is valid: though it says "do NOT acknowledge the JMS message", the message will be automatically acknowledged at the end of the XA transaction. Here is how we can ensure all components are part of an XA transaction: https://docs.mulesoft.com/jms-connector/1.7/jms-transactions
Additional information about transactions:
XA Transactions - you can use an XA transaction to group together a series of operations from multiple transactional resources, such as JMS, VM or JDBC resources, into a single, very reliable, global transaction. The XA (eXtended Architecture) standard is an X/Open group standard which specifies the interface between a global transaction manager and local transactional resource managers. The XA protocol defines a 2-phase commit protocol which can be used to more reliably coordinate and sequence a series of "all or nothing" operations across multiple servers, even servers of different types.
Use JMS ack if:
- Acknowledgment should occur eventually, perhaps asynchronously
- The performance of the message receipt is paramount
- The message processing is idempotent
- For the choreography portion of the SAGA pattern
Use JMS transactions:
- For all other times in the integration you want to perform an atomic unit of work
- When the unit of work comprises more than the receipt of a single message
- To simplify and unify the programming model (begin/commit/rollback)
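A sketch of option C in Mule 4 configuration, assuming both Database configs are backed by XA-capable data sources; the connector config names, SQL, and payload structure are hypothetical.

    <flow name="salesOrderFlow">
        <!-- The listener starts an XA transaction; the message is acknowledged
             only when the whole transaction commits -->
        <jms:listener config-ref="JMS_Config" destination="salesOrders"
                      transactionalAction="ALWAYS_BEGIN" transactionType="XA"/>
        <db:insert config-ref="rdbmsOne_Config" transactionalAction="ALWAYS_JOIN">
            <db:sql><![CDATA[INSERT INTO sales_order_header (id) VALUES (:id)]]></db:sql>
            <db:input-parameters><![CDATA[#[{id: payload.header.id}]]]></db:input-parameters>
        </db:insert>
        <db:insert config-ref="rdbmsTwo_Config" transactionalAction="ALWAYS_JOIN">
            <db:sql><![CDATA[INSERT INTO sales_order_summary (id, total) VALUES (:id, :total)]]></db:sql>
            <db:input-parameters><![CDATA[#[{id: payload.header.id,
                total: sum(payload.lines map $.price)}]]]></db:input-parameters>
        </db:insert>
    </flow>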
18.What operation can be performed through a JMX agent enabled in a Mule application?
A. View object store entries
B. Replay an unsuccessful message
C. Set a particular log4j2 log level to TRACE
D. Deploy a Mule application
Answer: C
Explanation:
JMX Management: Java Management Extensions (JMX) is a simple and standard way to manage applications, devices, services, and other resources. JMX is dynamic, so you can use it to monitor and manage resources as they are created, installed, and implemented. You can also use JMX to monitor and manage the Java Virtual Machine (JVM). Each resource is instrumented by one or more Managed Beans, or MBeans. All MBeans are registered in an MBean Server. The JMX server agent consists of an MBean Server and a set of services for handling MBeans. There are several agents provided with Mule for JMX support. The easiest way to configure JMX is to use the default JMX support agent.
Log4J Agent: the log4j agent exposes the configuration of the Log4J instance used by Mule for JMX management. You enable the Log4J agent using the <jmx-log4j> element. It does not take any additional properties.
Reference: https://docs.mulesoft.com/mule-runtime/3.9/jmx-management

19.What condition requires using a CloudHub Dedicated Load Balancer?
A. When cross-region load balancing is required between separate deployments of the same Mule application
B. When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
C. When API invocations across multiple CloudHub workers must be load balanced
D. When server-side load-balanced TLS mutual authentication is required between API implementations and API clients
Answer: D
Explanation:
The correct answer is: When server-side load-balanced TLS mutual authentication is required between API implementations and API clients.
CloudHub dedicated load balancers (DLBs) are an optional component of Anypoint Platform that enable you to route external HTTP and HTTPS traffic to multiple Mule applications deployed to CloudHub workers in a Virtual Private Cloud (VPC). Dedicated load balancers enable you to:
* Handle load balancing among the different CloudHub workers that run your application.
* Define SSL configurations to provide custom certificates and optionally enforce two-way SSL client authentication.
* Configure proxy rules that map your applications to custom domains. This enables you to host your applications under a single domain.
20.An integration Mule application is being designed to process orders by submitting them to a backend system for offline processing. Each order will be received by the Mule application through an HTTPS POST and must be acknowledged immediately. Once acknowledged, the order will be submitted to a backend system. Orders that cannot be successfully submitted due to rejections from the backend system will need to be processed manually (outside the backend system).
The Mule application will be deployed to a customer-hosted runtime and is able to use an existing ActiveMQ broker if needed.
The backend system has a track record of unreliability both due to minor network connectivity issues and longer outages.
What idiomatic (used for their intended purposes) combination of Mule application components and ActiveMQ queues are required to ensure automatic submission of orders to the backend system, while minimizing manual order processing?
A. An On Error scope, non-persistent VM, ActiveMQ Dead Letter Queue for manual processing
B. An On Error scope, MuleSoft Object Store, ActiveMQ Dead Letter Queue for manual processing
C. Until Successful component, MuleSoft Object Store, ActiveMQ is NOT needed or used
D. Until Successful component, ActiveMQ long retry Queue, ActiveMQ Dead Letter Queue for manual processing
Answer: D
Explanation:
The correct answer uses the following combination: Until Successful component, ActiveMQ long retry queue, ActiveMQ Dead Letter Queue for manual processing. Before seeing why, let's understand a few of the concepts involved.
Until Successful Scope: the Until Successful scope processes messages through its processors until the entire operation succeeds. It repeatedly retries to process a message that is attempting to complete an activity such as:
- Dispatching to outbound endpoints, for example, when calling a remote web service that may have availability issues.
- Executing a component method, for example, when executing on a Spring bean that may depend on unreliable resources.
- A sub-flow execution, to keep re-executing several actions until they all succeed.
- Any other message processor execution, to allow more complex scenarios.
How this helps the requirement: using the Until Successful scope we can retry sending the order to the backend system in case of error, avoiding manual processing later. Retry values can be configured in the Until Successful scope.
Apache ActiveMQ: an open-source message broker written in Java together with a full Java Message Service client. ActiveMQ has the ability to deliver messages with delays thanks to its scheduler. This functionality is the base for the broker redelivery plug-in. The redelivery plug-in can intercept dead letter processing and reschedule the failing messages for redelivery. Rather than being delivered to a DLQ, a failing message is scheduled to go to the tail of the original queue and be redelivered to a message consumer.
How this helps the requirement: if the backend application is down for a longer duration, where the Until Successful scope won't work, we can make use of the ActiveMQ long retry queue. The redelivery plug-in can intercept dead letter processing and reschedule the failing messages for redelivery.
Reference: https://docs.mulesoft.com/mule-runtime/4.3/migration-core-until-successful
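A sketch of the retry portion, with hypothetical config names, endpoints, and retry values: the Until Successful scope handles short outages, and on retry exhaustion the order is handed to an ActiveMQ long-retry queue.

    <flow name="submitOrderFlow">
        <!-- Retry the backend call up to 5 times, 30 seconds apart -->
        <until-successful maxRetries="5" millisBetweenRetries="30000">
            <http:request method="POST" config-ref="Backend_Config" path="/orders"/>
        </until-successful>
        <error-handler>
            <on-error-continue type="MULE:RETRY_EXHAUSTED">
                <!-- Hand the order to the ActiveMQ long-retry queue; the broker's
                     redelivery plug-in takes over from here -->
                <jms:publish config-ref="ActiveMQ_Config" destination="orders.longRetry"/>
            </on-error-continue>
        </error-handler>
    </flow>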
21.Mule applications need to be deployed to CloudHub so they can access on-premises database systems. These systems store sensitive and hence tightly protected data, so they are not accessible over the internet.
What network architecture supports this requirement?
A. An Anypoint VPC connected to the on-premises network using an IPsec tunnel or AWS DirectConnect, plus matching firewall rules in the VPC and on-premises network
B. Static IP addresses for the Mule applications deployed to the CloudHub Shared Worker Cloud, plus matching firewall rules and IP whitelisting in the on-premises network
C. An Anypoint VPC with one Dedicated Load Balancer fronting each on-premises database system, plus matching IP whitelisting in the load balancer and firewall rules in the VPC and on-premises network
D. Relocation of the database systems to a DMZ in the on-premises network, with Mule applications deployed to the CloudHub Shared Worker Cloud connecting only to the DMZ
Answer: A
Explanation:
The correct answer is: An Anypoint VPC connected to the on-premises network using an IPsec tunnel or AWS DirectConnect, plus matching firewall rules in the VPC and on-premises network.
IPsec tunnel: you can use an IPsec tunnel with network-to-network configuration to connect your on-premises data centers to your Anypoint VPC. An IPsec VPN tunnel is generally the recommended solution for VPC to on-premises connectivity, as it provides a standardized, secure way to connect. This method also integrates well with existing IT infrastructure such as routers and appliances.
* "Relocation of the database systems to a DMZ in the on-premises network, with Mule applications deployed to the CloudHub Shared Worker Cloud connecting only to the DMZ" is not a feasible option, as it exposes the sensitive systems.
* "Static IP addresses for the Mule applications deployed to the CloudHub Shared Worker Cloud, plus matching firewall rules and IP whitelisting in the on-premises network" is a risk for sensitive data: even with IP whitelisting, the application still won't have a secure, private path to the database, so this is also not a feasible option.
* "An Anypoint VPC with one Dedicated Load Balancer fronting each on-premises database system..." makes no sense and is far too much work; why would you add a load balancer for each backend system?
Reference: https://docs.mulesoft.com/runtime-manager/vpc-connectivity-methods-concept
22.A Mule application is being designed to perform product orchestration. The Mule application needs to join together the responses from an Inventory API and a Product Sales History API with the least latency.
To minimize the overall latency, what is the most idiomatic (used for its intended purpose) design to call each API request in the Mule application?
A. Call each API request in a separate lookup call from a DataWeave reduce operator
B. Call each API request in a separate route of a Scatter-Gather
C. Call each API request in a separate route of a Parallel For Each scope
D. Call each API request in a separate Async scope
Answer: B
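A sketch of the Scatter-Gather design, with hypothetical request configs: both APIs are called concurrently, and the routes' results are merged afterwards.

    <scatter-gather>
        <route>
            <http:request method="GET" config-ref="Inventory_API" path="/inventory"/>
        </route>
        <route>
            <http:request method="GET" config-ref="SalesHistory_API" path="/sales-history"/>
        </route>
    </scatter-gather>
    <!-- The result is an object keyed "0", "1": one entry per route -->
    <ee:transform>
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
    output application/json
    ---
    {
        inventory: payload."0".payload,
        salesHistory: payload."1".payload
    }]]></ee:set-payload>
        </ee:message>
    </ee:transform>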
23.An Order microservice and a Fulfillment microservice are being designed to communicate with their clients through message-based integration (and NOT through API invocations).
The Order microservice publishes an Order message (a kind of command message) containing the details of an order to be fulfilled. The intention is that Order messages are only consumed by one Mule application, the Fulfillment microservice.
The Fulfillment microservice consumes Order messages, fulfills the order described therein, and then publishes an OrderFulfilled message (a kind of event message). Each OrderFulfilled message can be consumed by any interested Mule application, and the Order microservice is one such Mule application.
What is the most appropriate choice of message broker(s) and message destination(s) in this scenario?
A. Order messages are sent to an Anypoint MQ exchange
OrderFulfilled messages are sent to an Anypoint MQ queue
Both microservices interact with Anypoint MQ as the message broker, which must therefore scale to support the load of both microservices
B. Order messages are sent directly to the Fulfillment microservice
OrderFulfilled messages are sent directly to the Order microservice
The Order microservice interacts with one AMQP-compatible message broker and the Fulfillment microservice interacts with a different AMQP-compatible message broker, so that both message brokers can be chosen and scaled to best support the load of each microservice
C. Order messages are sent to a JMS queue
OrderFulfilled messages are sent to a JMS topic
Both microservices interact with the same JMS provider (message broker) instance, which must therefore scale to support the load of both microservices
D. Order messages are sent to a JMS queue
OrderFulfilled messages are sent to a JMS topic
The Order microservice interacts with one JMS provider (message broker) and the Fulfillment microservice interacts with a different JMS provider, so that both message brokers can be chosen and scaled to best support the load of each microservice
Answer: C
Explanation:
* If you need to scale a JMS provider/message broker, you can add nodes to scale it horizontally or add memory to scale it vertically.
* Cons of adding another JMS provider/message broker: it adds cost, adds the complexity of using two JMS brokers, and adds operational overhead, say, when running both ActiveMQ and IBM MQ. So the two options that use two brokers are not the best choice.
* It is stated that "The Fulfillment microservice consumes Order messages, fulfills the order described therein, and then publishes an OrderFulfilled message. Each OrderFulfilled message can be consumed by any interested Mule application." When you publish a message on a topic, it goes to all the subscribers who are interested, so zero to many subscribers will receive a copy of the message. When you send a message on a queue, it will be received by exactly one consumer.
* As we need multiple consumers to consume the OrderFulfilled message, the following option is not a valid choice: "Order messages are sent to an Anypoint MQ exchange. OrderFulfilled messages are sent to an Anypoint MQ queue. Both microservices interact with Anypoint MQ as the message broker, which must therefore scale to support the load of both microservices."
* Order messages are only consumed by one Mule application, the Fulfillment microservice, so we publish them on a queue; OrderFulfilled messages can be consumed by any interested Mule application, so they need to be published on a topic of the same broker.
* Correct answer: "Order messages are sent to a JMS queue. OrderFulfilled messages are sent to a JMS topic. Both microservices interact with the same JMS provider (message broker) instance, which must therefore scale to support the load of both microservices."
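A sketch of the Fulfillment side against a single shared broker; the config and flow names are hypothetical. The listener consumes the Order command from a queue, and the event is published to a topic so any interested application can subscribe.

    <flow name="fulfillOrderFlow">
        <!-- Exactly one consumer receives each Order command from the queue -->
        <jms:listener config-ref="Shared_Broker" destination="orders"/>
        <flow-ref name="fulfillOrder"/>
        <!-- Publish the OrderFulfilled event to a topic for all subscribers -->
        <jms:publish config-ref="Shared_Broker" destination="orderFulfilled"
                     destinationType="TOPIC"/>
    </flow>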
24.An organization is creating a set of new services that are critical for their business. The project team prefers using REST for all services but is willing to use SOAP with common WS-* standards if a particular service requires it.
What requirement would drive the team to use SOAP/WS-* for a particular service?
A. Must use XML payloads for the service and ensure that it adheres to a specific schema
B. Must publish and share the service specification (including data formats) with the consumers of the service
C. Must support message acknowledgement and retry as part of the protocol
D. Must secure the service, requiring all consumers to submit a valid SAML token
Answer: D
Explanation:
Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdPs) to pass authorization credentials to service providers (SPs). SAML transactions use Extensible Markup Language (XML) for standardized communications between the identity provider and service providers. SAML is the link between the authentication of a user's identity and the authorization to use a service.
WS-Security is the key extension that supports many authentication models including basic username/password credentials, SAML, OAuth and more.
A common way that SOAP APIs are authenticated is via SAML Single Sign-On (SSO). SAML works by facilitating the exchange of authentication and authorization credentials across applications. However, there is no specification that describes how to add SAML to REST web services.
Reference: https://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf

25.A Mule application is synchronizing customer data between two different database systems.
What is the main benefit of using eXtended Architecture (XA) transactions over local transactions to synchronize these two different database systems?
A. An XA transaction synchronizes the database systems with the least amount of Mule configuration or coding
B. An XA transaction handles the largest number of requests in the shortest time
C. An XA transaction automatically rolls back operations against both database systems if any operation fails
D. An XA transaction writes to both database systems as fast as possible
Answer: C
Explanation:
An XA transaction is a global transaction spanning both transactional resources: its two-phase commit protocol ensures that either both databases commit or both roll back, which is exactly what keeps the two systems consistent. XA comes with a performance cost, so speed and throughput are not its benefit.
Reference: https://docs.oracle.com/middleware/1213/wls/PERFM/llrtune.htm#PERFM997

26.An integration Mule application is being designed to synchronize customer data between two systems. One system is an IBM Mainframe and the other system is a Salesforce Marketing Cloud (CRM) instance. Both systems have been deployed in their typical configurations, and are to be invoked using the native protocols provided by Salesforce and IBM.
What interface technologies are the most straightforward and appropriate to use in this Mule application to interact with these systems, assuming that Anypoint Connectors exist that implement these interface technologies?
A. IBM: DB access, CRM: gRPC
B. IBM: REST, CRM: REST
C. IBM: ActiveMQ, CRM: REST
D. IBM: CICS, CRM: SOAP
Answer: D
Explanation:
The correct answer is IBM: CICS, CRM: SOAP.
* Within Anypoint Exchange, MuleSoft offers the IBM CICS connector. Anypoint Connector for IBM CICS Transaction Gateway (IBM CTG Connector) provides integration with back-end CICS apps using the CICS Transaction Gateway.
* Anypoint Connector for Salesforce Marketing Cloud (Marketing Cloud Connector) enables you to connect to the Marketing Cloud API web services (now known as the Marketing Cloud API), which is also known as the Salesforce Marketing Cloud. This connector exposes convenient operations via SOAP for exploiting the capabilities of Salesforce Marketing Cloud.

27.To implement predictive maintenance on its machinery equipment, ACME Tractors has installed thousands of IoT sensors that will send data for each machinery asset as sequences of JMS messages, in near real-time, to a JMS queue named SENSOR_DATA on a JMS server. The Mule application contains a JMS Listener operation configured to receive incoming messages from the JMS server's SENSOR_DATA JMS queue. The Mule application persists each received JMS message, then sends a transformed version of the corresponding Mule event to the machinery equipment back-end systems.
The Mule application will be deployed to a multi-node, customer-hosted Mule runtime cluster. Under normal conditions, each JMS message should be processed exactly once.
How should the JMS Listener be configured to maximize performance and concurrent message processing of the JMS queue?
A. Set numberOfConsumers = 1, set primaryNodeOnly = false
B. Set numberOfConsumers = 1, set primaryNodeOnly = true
C. Set numberOfConsumers to a value greater than one, set primaryNodeOnly = true
D. Set numberOfConsumers to a value greater than one, set primaryNodeOnly = false
Answer: D
Explanation:
Reference: https://docs.mulesoft.com/jms-connector/1.8/jms-performance
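A sketch of that listener configuration; the config name and consumer count are hypothetical:

    <!-- Multiple consumers per node, on every node of the cluster; the queue
         still delivers each message to exactly one consumer -->
    <jms:listener config-ref="JMS_Config" destination="SENSOR_DATA"
                  numberOfConsumers="8" primaryNodeOnly="false"/>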
28.An external REST client periodically sends an array of records in a single POST request to a Mule application API endpoint. The Mule application must validate each record of the request against a JSON schema before sending it to a downstream system in the same order that it was received in the array.
Record processing will take place inside a router or scope that calls a child flow. The child flow has its own error handling defined. Any validation or communication failures should not prevent further processing of the remaining records.
To best address these requirements, what is the most idiomatic (used for its intended purpose) router or scope to use in the parent flow, and what type of error handler should be used in the child flow?
A. First Successful router in the parent flow
On Error Continue error handler in the child flow
B. For Each scope in the parent flow
On Error Continue error handler in the child flow
C. Parallel For Each scope in the parent flow
On Error Propagate error handler in the child flow
D. Until Successful router in the parent flow
On Error Propagate error handler in the child flow
Answer: B
Explanation:
The correct answer is: For Each scope in the parent flow, On Error Continue error handler in the child flow. You can extract the following set of requirements from the question: a) records should be sent to the downstream system in the same order that they were received in the array; b) any validation or communication failures should not prevent further processing of the remaining records. The first requirement can be met using a For Each scope in the parent flow, and the second requirement can be met using an On Error Continue handler in the child flow so that the error is suppressed.
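A minimal sketch, with hypothetical flow names, schema location, and downstream config:

    <flow name="processRecordsFlow">
        <!-- Iterates sequentially, preserving the order of the array -->
        <foreach collection="#[payload]">
            <flow-ref name="processOneRecord"/>
        </foreach>
    </flow>

    <flow name="processOneRecord">
        <json:validate-schema schema="schemas/record-schema.json"/>
        <http:request method="POST" config-ref="Downstream_API" path="/records"/>
        <error-handler>
            <!-- Suppress the error so the parent For Each continues with the next record -->
            <on-error-continue/>
        </error-handler>
    </flow>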
29.The ABC company has an Anypoint Runtime Fabric on VMs/Bare Metal (RTF-VM) appliance installed on its own customer-hosted AWS infrastructure. Mule applications are deployed to this RTF-VM appliance. As part of the company standards, the Mule application logs must be forwarded to an external log management tool (LMT).
Given the company's current setup and requirements, what is the most idiomatic (used for its intended purpose) way to send Mule application logs to the external LMT?
A. In RTF-VM, install and configure the external LMT's log-forwarding agent
B. In RTF-VM, edit the pod configuration to automatically install and configure an Anypoint Monitoring agent
C. In each Mule application, configure custom Log4j settings
D. In RTF-VM, configure the out-of-the-box external log forwarder
Answer: A
Explanation:
Reference: https://help.mulesoft.com/s/article/Enable-external-log-forwarding-for-Mule-applications-deployed-in-RTF

30.Anypoint Exchange is required to maintain the source code of some of the assets committed to it, such as Connectors, Templates, and API specifications.
What is the best way to use an organization's source-code management (SCM) system in this context?
A. Organizations should continue to use an SCM system of their choice, in addition to keeping source code for these asset types in Anypoint Exchange, thereby enabling parallel development, branching, and merging
B. Organizations need to use Anypoint Exchange as the main SCM system to centralize versioning and avoid code duplication
C. Organizations can continue to use an SCM system of their choice for branching and merging, as long as they follow the branching and merging strategy enforced by Anypoint Exchange
D. Organizations need to point Anypoint Exchange to their SCM system so Anypoint Exchange can pull source code when requested by developers and provide it to Anypoint Studio
Answer: A
Explanation:
* Organizations should continue to use the SCM system of their choice, in addition to keeping source code for these asset types in Anypoint Exchange, thereby enabling parallel development and branching.
* The reason is that Anypoint Exchange is not a full-fledged version repository like GitHub, but at the same time it is tightly coupled with Mule assets.

31.An API client is implemented as a Mule application that includes an HTTP Request operation using a default configuration. The HTTP Request operation invokes an external API that follows standard HTTP status code conventions, which causes the HTTP Request operation to return a 4xx status code.
What is a possible cause of this status code response?
A. An error occurred inside the external API implementation when processing the HTTP request that was received from the outbound HTTP Request operation of the Mule application
B. The external API reported that the API implementation has moved to a different external endpoint
C. The HTTP response cannot be interpreted by the HTTP Request operation of the Mule application after it was received from the external API
D. The external API reported an error with the HTTP request that was received from the outbound HTTP Request operation of the Mule application
Answer: D
Explanation:
The correct choice is: "The external API reported an error with the HTTP request that was received from the outbound HTTP Request operation of the Mule application."
Understanding HTTP 4xx client error response codes: a 4xx error arises when there is a problem with the client's request, and not with the server. Such cases usually arise when a client's access to a resource is restricted, the client misspells the URL, or when a resource is nonexistent or removed from the public's view. In short, it is an error that occurs because of a mismatch between what a client is trying to access and its availability to the client, either because the client does not have the right to access it, or because what the client is trying to access simply does not exist. Some examples of 4xx errors are:
400 Bad Request: the server could not understand the request due to invalid syntax.
401 Unauthorized: although the HTTP standard specifies "unauthorized", semantically this response means "unauthenticated". That is, the client must authenticate itself to get the requested response.
403 Forbidden: the client does not have access rights to the content; that is, it is unauthorized, so the server is refusing to give the requested resource. Unlike 401, the client's identity is known to the server.
404 Not Found: the server cannot find the requested resource. In the browser, this means the URL is not recognized. In an API, this can also mean that the endpoint is valid but the resource itself does not exist. Servers may also send this response instead of 403 to hide the existence of a resource from an unauthorized client. This response code is probably the most famous one due to its frequent occurrence on the web.
405 Method Not Allowed: the request method is known by the server but has been disabled and cannot be used. For example, an API may forbid DELETE-ing a resource. The two mandatory methods, GET and HEAD, must never be disabled and should not return this error code.
406 Not Acceptable: this response is sent when the web server, after performing server-driven content negotiation, doesn't find any content that conforms to the criteria given by the user agent.
"The external API reported that the API implementation has moved to a different external endpoint" cannot be the correct answer, as that situation produces 301 Moved Permanently: the URL of the requested resource has been changed permanently, and the new URL is given in the response.
In layman's terms the scenario is:
API CLIENT --> MuleSoft API: "Hey, API... process this" --> External API
API CLIENT <-- MuleSoft API: "I'm sorry, client... something is wrong with that request" (4xx) <-- External API

32.What is required before an API implemented using the components of Anypoint Platform can be managed and governed (by applying API policies) on Anypoint Platform?
A. The API must be published to Anypoint Exchange and a corresponding API instance ID must be obtained from API Manager to be used in the API implementation
B. The API implementation source code must be committed to a source control management system (such as GitHub)
C. A RAML definition of the API must be created in API designer so it can then be published to Anypoint Exchange
D. The API must be shared with the potential developers through an API portal so API consumers can interact with the API
Answer: C
Explanation:
The context of the question is about managing and governing Mule applications deployed on Anypoint Platform.
Anypoint API Manager (API Manager) is a component of Anypoint Platform that enables you to manage, govern, and secure APIs. It leverages the runtime capabilities of API Gateway and Anypoint Service Mesh, both of which enforce policies, collect and track analytics data, manage proxies, provide encryption and authentication, and manage applications.
A prerequisite for managing an API is that the API must be published to Anypoint Exchange; hence the correct option is C.
Reference: https://docs.mulesoft.com/api-manager/2.x/getting-started-proxy https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept

33.A marketing organization is designing a Mule application to process campaign data. The Mule application will periodically check for a file in an SFTP location and process the records in the file. The size of the file can vary from 10 MB to 5 GB. Due to the limited availability of vCores, the Mule application is deployed to a single CloudHub worker configured with vCore size 0.2.
The application must transform and send different formats of this file to three different downstream SFTP locations.
What is the most idiomatic (used for its intended purpose) and performant way to configure the SFTP operations or event sources to process the large files to support these deployment requirements?
A. Use an in-memory repeatable stream
B. Use a file-stored non-repeatable stream
C. Use an in-memory non-repeatable stream
D. Use a file-stored repeatable stream
Answer: D
Explanation:
The stream must be consumed three times (once per downstream SFTP location), so it has to be repeatable; and a 5 GB file cannot be buffered in the memory of a 0.2-vCore worker, so the repeatable stream must be file-stored rather than in-memory.
Reference: https://docs.mulesoft.com/mule-runtime/4.4/streaming-about
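A sketch of the read operation with a file-stored repeatable stream; the config name, path, and buffer size are hypothetical:

    <sftp:read config-ref="SFTP_Config" path="/inbound/campaign.csv">
        <!-- Spill to disk beyond 512 KB so a 5 GB file never has to fit in memory,
             while still allowing the stream to be consumed multiple times -->
        <repeatable-file-store-stream inMemorySize="512" bufferUnit="KB"/>
    </sftp:read>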
something is wrong with that request"<C (4XX) External API 32.What is required before an API implemented using the components of Anypoint Platform can be managed and governed (by applying API policies) on Anypoint Platform? A. The API must be published to Anypoint Exchange and a corresponding API instance ID must be obtained from API Manager to be used in the API implementation B. The API implementation source code must be committed to a source control management system (such as GitHub) C. A RAML definition of the API must be created in API designer so it can then be ly published to Anypoint Exchange th oo D. The API must be shared with the potential developers through an API portal so API m S consumers can interact with the API m xa Answer: C E 1 Explanation: el ev Context of the question is about managing and governing mule applications deployed -L on Anypoint platform. IA C M Anypoint API Manager (API Manager) is a component of Anypoint Platform that s as enables you to manage, govern, and secure APIs. It leverages the runtime -P capabilities of API Gateway and Anypoint Service Mesh, both of which enforce ns io policies, collect and track st analytics data, manage proxies, provide encryption and authentication, and manage ue Q applications. F D P Prerequisite of managing an API is that the API must be published to Anypoint 1 el Exchange. Hence the correct option in C ev Mule Ref Doc: https://docs.mulesoft.com/api-manager/2.x/getting-started-proxy -L IA Reference: https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new- C M concept t of eS ul M 33.A marketing organization is designing a Mule application to process campaign data. The Mule application will periodically check for a file in a SFTP location and process the records in the file. The size of the file can vary from 10MB to 5GB. Due to the limited availabiltty of vCores, the Mule application is deployed to a single CloudHub worker configured with vCore size 0.2. The application must transform and send different formats of this file to three different downstream SFTP locations. What is the most idiomatic (used for its intended purpose) and performant way to configure the SFTP operations or event sources to process the large files to support these deployment requirements? A. Use an in-memory repeatable stream B. Use a file-stored non-repeatable stream C. Use an in-memory non-repeatable stream D. Use a file-stored repeatable stream Answer: A Explanation: Reference: https://docs.mulesoft.com/mule-runtime/4.4/streaming-about 34.Refer to the exhibit. ly th oo m S m xa E 1 el ev -L IA C M s as -P ns io st ue Q F D P 1 el ev -L IA A Mule application is deployed to a cluster of two customer-hosted Mute runtimes. C M The Mute application has a flow that polls a database and another flow with an HTTP t of eS Listener. ul HTTP clients send HTTP requests directly to individual cluster nodes. M What happens to database polling and HTTP request handling in the time after the primary (master) node of the cluster has railed, but before that node is restarted? A. Database polling continues Only HTTP requests sent to the remaining node continue to be accepted B. Database polling stops All HTTP requests continue to be accepted C. Database polling continues All HTTP requests continue to be accepted, but requests to the failed node Incur increased latency D. 
33.A marketing organization is designing a Mule application to process campaign data. The Mule application will periodically check for a file in an SFTP location and process the records in the file. The size of the file can vary from 10MB to 5GB. Due to the limited availability of vCores, the Mule application is deployed to a single CloudHub worker configured with vCore size 0.2.
The application must transform and send different formats of this file to three different downstream SFTP locations.
What is the most idiomatic (used for its intended purpose) and performant way to configure the SFTP operations or event sources to process the large files to support these deployment requirements?
A. Use an in-memory repeatable stream
B. Use a file-stored non-repeatable stream
C. Use an in-memory non-repeatable stream
D. Use a file-stored repeatable stream
Answer: D
Explanation:
The payload must be consumed three times, once per downstream SFTP location, so the stream must be repeatable; a non-repeatable stream can be read only once. A 0.2 vCore worker has far too little heap to buffer a file of up to 5GB, so an in-memory strategy is not an option. A file-stored repeatable stream buffers the stream to disk, which supports repeated reads of arbitrarily large payloads on a small-memory worker.
Reference: https://docs.mulesoft.com/mule-runtime/4.4/streaming-about
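A minimal sketch of this strategy follows, assuming Mule 4 SFTP connector configurations whose names and paths are all placeholders:

<!-- Hypothetical: read a large file with a file-stored repeatable stream -->
<sftp:read config-ref="vendorSftpConfig" path="/inbound/campaign.csv">
    <repeatable-file-store-stream inMemorySize="1" bufferUnit="MB"/>
</sftp:read>
<!-- The same repeatable payload can now be transformed and written three times -->
<sftp:write config-ref="downstreamSftpConfig1" path="/outbound/campaign-format1.csv"/>

Only the first 1MB is held in memory; the remainder spills to disk, which is what makes a 5GB file workable on a 0.2 vCore worker.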
34.Refer to the exhibit. [Exhibit: diagram of a Mule application deployed to a cluster of two customer-hosted Mule runtimes, with HTTP clients sending requests directly to the individual cluster nodes.]
A Mule application is deployed to a cluster of two customer-hosted Mule runtimes. The Mule application has a flow that polls a database and another flow with an HTTP Listener.
HTTP clients send HTTP requests directly to individual cluster nodes.
What happens to database polling and HTTP request handling in the time after the primary (master) node of the cluster has failed, but before that node is restarted?
A. Database polling continues. Only HTTP requests sent to the remaining node continue to be accepted.
B. Database polling stops. All HTTP requests continue to be accepted.
C. Database polling continues. All HTTP requests continue to be accepted, but requests to the failed node incur increased latency.
D. Database polling stops. All HTTP requests are rejected.
Answer: A
Explanation:
The correct answer is: Database polling continues. Only HTTP requests sent to the remaining node continue to be accepted.
When node 1 goes down, the surviving node 2 takes over as primary, so database polling continues on node 2. Requests that arrive directly at node 2 are also accepted and processed as usual. The only thing that no longer works is requests sent to node 1's HTTP listener. The flaw in this architecture is that the HTTP clients send requests directly to individual cluster nodes instead of through a load balancer.
By default, clustering Mule runtime engines ensures high system availability. If a Mule runtime engine node becomes unavailable due to failure or planned downtime, another node in the cluster can assume the workload and continue to process existing events and messages.
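To make that behavior concrete, here is a hypothetical sketch of the two flows. In a Mule cluster, a polling source such as the scheduler fires on the primary node only, while an HTTP listener is active on every node; all names and connection details below are assumptions:

<!-- Hypothetical polling flow: in a cluster, the scheduler fires on the primary node only -->
<flow name="dbPollingFlow">
    <scheduler>
        <scheduling-strategy>
            <fixed-frequency frequency="30" timeUnit="SECONDS"/>
        </scheduling-strategy>
    </scheduler>
    <db:select config-ref="dbConfig">
        <db:sql>SELECT * FROM orders WHERE processed = 0</db:sql>
    </db:select>
</flow>

<!-- Hypothetical HTTP flow: the listener is active on every cluster node -->
<flow name="httpApiFlow">
    <http:listener config-ref="httpListenerConfig" path="/api/*"/>
</flow>

When the primary node fails, the cluster promotes the surviving node, so the scheduler resumes there; the HTTP listener on the failed node, however, is simply unreachable until that node returns.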
35.A Mule application is deployed to a single CloudHub worker and the public URL appears in Runtime Manager as the App URL. Requests are sent by external web clients over the public internet to the Mule application App URL. Each of these requests is routed to the HTTPS Listener event source of the running Mule application.
Later, the DevOps team edits some properties of this running Mule application in Runtime Manager.
Immediately after the new property values are applied in Runtime Manager, how is the current Mule application deployment affected, and how will future web client requests to the Mule application be handled?
A. CloudHub will redeploy the Mule application to the OLD CloudHub worker. New web client requests will RETURN AN ERROR until the Mule application is redeployed to the OLD CloudHub worker.
B. CloudHub will redeploy the Mule application to a NEW CloudHub worker. New web client requests will RETURN AN ERROR until the NEW CloudHub worker is available.
C. CloudHub will redeploy the Mule application to a NEW CloudHub worker. New web client requests are ROUTED to the OLD CloudHub worker until the NEW CloudHub worker is available.
D. CloudHub will redeploy the Mule application to the OLD CloudHub worker. New web client requests are ROUTED to the OLD CloudHub worker BOTH before and after the Mule application is redeployed.
Answer: C
Explanation:
CloudHub supports updating your applications at runtime so end users of your HTTP APIs experience zero downtime. While your application update is deploying, CloudHub keeps the old version of your application running. Your domain points to the old version of your application until the newly uploaded version is fully started. This allows you to keep servicing requests from your old application while the new version of your application is starting.

36.An organization's security policies mandate complete control of the login credentials used to log in to Anypoint Platform.
What feature of Anypoint Platform should be used to meet this requirement?
A. Enterprise Security Module
B. Client ID Secret
C. Federated Identity Management
D. Federated Client Management
Answer: C
Explanation:
The correct answer is Federated Identity Management. As the Anypoint Platform organization administrator, you can configure identity management in Anypoint Platform to set up users for single sign-on (SSO).
Configure identity management using one of the following single sign-on standards:
- OpenID Connect: end-user identity verification by an authorization server, including SSO
- SAML 2.0: web-based authorization, including cross-domain SSO
Client management, by contrast, is where Anypoint Platform acts as a client provider by default, although you can also configure external client providers to authorize client applications. As an API owner, you can apply an OAuth 2.0 policy to authorize client applications that try to access your API; you need an OAuth 2.0 provider to use an OAuth 2.0 policy.
Reference: https://help.mulesoft.com/s/article/How-federated-users-are-mapped-to-Anypoint-Platform-Business-Groups-when-External-Identity-is-enabled
https://docs.mulesoft.com/access-management/external-identity

37.What API policy would LEAST likely be applied to a Process API?
A. Custom circuit breaker
B. Client ID enforcement
C. Rate limiting
D. JSON threat protection
Answer: D
Explanation:
The key to this question lies in the fact that Process APIs are not meant to be accessed directly by clients. Analyzing the options one by one:
Client ID enforcement: generally applied at the Process API level to ensure that the identity of API clients is always known and available for API-based analytics.
Rate limiting: applied on Process-level APIs to secure them against the degradation of service that can happen when the load received is more than they can handle.
Custom circuit breaker: also a useful feature on Process-level APIs, as it saves the API client the wasted time and effort of invoking a failing API.
JSON threat protection: not required at the Process API level; it is instead implemented on Experience APIs. This policy safeguards applications from attacks that inject malicious code into a JSON object. Because Process APIs are ideally never called from the external world, this policy is never used on them. Hence the correct answer is JSON threat protection.
MuleSoft Documentation Reference: https://docs.mulesoft.com/api-manager/2.x/policy-mule3-json-threat
38.An organization is implementing a Quote of the Day API that caches today's quote.
What scenario can use the CloudHub Object Store connector to persist the cache's state?
A. When there is one deployment of the API implementation to CloudHub and another one to a customer-hosted Mule runtime that must share the cache state.
B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state.
C. When there is one CloudHub deployment of the API implementation to three workers that must share the cache state.
D. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state.
Answer: C
Explanation:
The Object Store connector is a Mule component that allows simple key-value storage. Although it can serve a wide variety of use cases, it is mainly designed for:
- Storing synchronization information, such as watermarks
- Storing temporal information, such as access tokens
- Storing user information
Additionally, Mule runtime uses object stores to support some of its own components, for example:
- The Cache module uses an object store to maintain all of the cached data
- The OAuth module (and every OAuth-enabled connector) uses object stores to store the access and refresh tokens
Object Store data is kept in the same region as the worker where the app is initially deployed. For example, if you deploy to the Singapore region, the object store persists in the Singapore region.
MuleSoft Reference: https://docs.mulesoft.com/object-store-connector/1.1/
Data can be shared between different instances of the same Mule application, but object stores are not recommended for inter-application communication: an object store cannot be used to share cached data across separate Mule applications or across deployments under separate business groups. Hence the correct answer is: When there is one CloudHub deployment of the API implementation to three workers that must share the cache state.
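For illustration, a minimal hypothetical sketch of caching a value with the Object Store connector so that all workers of one CloudHub deployment share it; the store name, key, and flow structure are placeholders:

<!-- Hypothetical persistent store, shared by all workers of this one app -->
<os:object-store name="quoteStore" persistent="true"/>

<flow name="cacheQuoteFlow">
    <!-- Store today's quote under a well-known key -->
    <os:store key="quoteOfTheDay" objectStore="quoteStore">
        <os:value>#[payload]</os:value>
    </os:store>
</flow>

<flow name="readQuoteFlow">
    <!-- Any worker of this deployment can read the cached value back -->
    <os:retrieve key="quoteOfTheDay" objectStore="quoteStore"/>
</flow>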
39.An API has been unit tested and is ready for integration testing. The API is governed by a Client ID Enforcement policy in all environments.
What must the testing team do before they can start integration testing the API in the Staging environment?
A. They must access the API portal and create an API notebook using the Client ID and Client Secret supplied by the API portal in the Staging environment
B. They must request access to the API instance in the Staging environment and obtain a Client ID and Client Secret to be used for testing the API
C. They must be assigned as an API version owner of the API in the Staging environment
D. They must request access to the Staging environment and obtain the Client ID and Client Secret for that environment to be used for testing the API
Answer: B
Explanation:
- It is stated that the API is governed by a Client ID Enforcement policy in all environments.
- The Client ID Enforcement policy allows only authorized applications to access the deployed API implementation.
- Each authorized application is configured with credentials: client_id and client_secret.
- At runtime, authorized applications provide the credentials with each request to the API implementation.
MuleSoft Reference: https://docs.mulesoft.com/api-manager/2.x/policy-mule3-client-id-based-policies
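Assuming the policy's default configuration, which reads the credentials from client_id and client_secret headers (the header names are configurable in the policy), a hypothetical test call from another Mule flow might look like this; the config name, path, and property names are placeholders:

<!-- Hypothetical call to the governed API, passing client credentials as headers -->
<http:request method="GET" config-ref="stagingApiConfig" path="/quotes">
    <http:headers>#[{client_id: p('test.client.id'), client_secret: p('test.client.secret')}]</http:headers>
</http:request>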
40.An organization wants to achieve its high-availability goal for Mule applications in a customer-hosted runtime plane. Due to the complexity involved, data cannot be shared among different instances of the same Mule application.
What option best suits this requirement, considering that high availability is very critical to the organization?
A. The cluster can be configured
B. Use a third-party product to implement a load balancer
C. High availability can be achieved only in CloudHub
D. Use a persistent object store
Answer: B
Explanation:
High availability is about the up-time of your application.
"High availability can be achieved only in CloudHub" is not a correct statement; it can be achieved in customer-hosted runtime planes as well.
An object store is a facility for storing objects in or across Mule applications. Mule runtime engine (Mule) uses object stores to persist data for eventual retrieval. It can help with disaster recovery but not with high availability: using an object store cannot guarantee that all instances won't go down at once, so it is not an appropriate choice.
Reference: https://docs.mulesoft.com/mule-runtime/4.3/mule-object-stores
High availability can be achieved by the following two models for on-premises MuleSoft implementations:
1) Mule clustering, where multiple Mule servers are available within the same cluster environment and the routing of requests is done by the load balancer. A cluster is a set of up to eight servers that act as a single deployment target and high-availability processing unit. Application instances in a cluster are aware of each other, share common information, and synchronize statuses. If one server fails, another server takes over processing applications. A cluster can run multiple applications. (Refer to the left half of the diagram.) In the given scenario it is stated that "data cannot be shared among different instances", so this is not a correct choice.
Reference: https://docs.mulesoft.com/runtime-manager/cluster-about
2) Load-balanced standalone Mule instances, where high availability is achieved even without a cluster by having a third-party load balancer direct requests to different Mule servers. This approach does not share or synchronize data between Mule runtimes, and the load-balancing algorithms are implemented in the external load balancer. (Refer to the right half of the diagram.)
[Diagram: Mule clustering (left) vs. load-balanced standalone Mule instances (right)]

41.A Mule application is deployed to a cluster of two (2) customer-hosted Mule runtimes. Currently the node named Alice is the primary node and the node named Bob is the secondary node. The Mule application has a flow that polls a directory on a file system for new files.
The primary node Alice fails for an hour and is then restarted.
After the Alice node completely restarts, from what node are the files polled, and what node is now the primary node for the cluster?
A. Files are polled from the Alice node. Alice is now the primary node.
B. Files are polled from the Bob node. Alice is now the primary node.
C. Files are polled from the Alice node. Bob is now the primary node.
D. Files are polled from the Bob node. Bob is now the primary node.
Answer: D
Explanation:
- Mule high-availability clustering provides basic failover capability for Mule.
- When the primary Mule runtime becomes unavailable, for example because of a fatal JVM or hardware failure or because it is taken offline for maintenance, a backup Mule runtime immediately becomes the primary node and resumes processing where the failed instance left off.
- After a system administrator recovers a failed Mule runtime server and puts it back online, that server automatically becomes the backup node. In this case Alice, once up, will become the backup.
Reference: https://docs.mulesoft.com/mule-runtime/4.3/hadr-guide
So the correct choice is: Files are polled from the Bob node; Bob is now the primary node.

42.An organization has various integrations implemented as Mule applications. Some of these Mule applications are deployed to customer-hosted Mule runtimes (on-premises) while others execute in the MuleSoft-hosted runtime plane (CloudHub). To perform the integration functionality, these Mule applications connect to various backend systems, with multiple applications typically needing to access the backend systems.
How can the organization most effectively avoid creating duplicates in each Mule application of the credentials required to access the backend systems?
A. Create a Mule domain project that maintains the credentials as Mule domain-shared resources. Deploy the Mule applications to the Mule domain, so the credentials are available to the Mule applications.
B. Store the credentials in properties files in a shared folder within the organization's data center. Have the Mule applications load the properties files from this shared location at startup.
C. Segregate the credentials for each backend system into environment-specific properties files. Package these properties files in each Mule application, from where they are loaded at startup.
D. Configure or create a credentials service that returns the credentials for each backend system and that is accessible from customer-hosted and MuleSoft-hosted Mule runtimes. Have the Mule applications load the properties at startup by invoking that credentials service.
Answer: D
Explanation:
- "Create a Mule domain project that maintains the credentials as Mule domain-shared resources" is wrong, as domain projects are not supported in CloudHub.
- We should avoid creating duplicates in each Mule application, but two of the options cause exactly that duplication of credentials: storing the credentials in properties files in a shared folder within the organization's data center, and segregating the credentials for each backend system into environment-specific properties files packaged in each Mule application. So these are also wrong choices.
- A credentials service is the best approach in this scenario. Mule domain projects are not supported on CloudHub, and it is not recommended to keep multiple copies of configuration values, as this makes them difficult to maintain.
As a related note, use the Mule Credentials Vault to encrypt data in a properties file. The properties file in Mule stores data as key-value pairs, which may contain information such as usernames, first and last names, and credit card numbers. A Mule application may access this data as it processes messages, for example to acquire login credentials for an external web service. However, though this sensitive, private data must be stored in a properties file for Mule to access, it must also be protected against unauthorized, and potentially malicious, use by anyone with access to the Mule application.

43.An organization uses one specific CloudHub (AWS) region for all CloudHub deployments.
How are CloudHub workers assigned to availability zones (AZs) when the organization's Mule applications are deployed to CloudHub in that region?
A. Workers belonging to a given environment are assigned to the same AZ within that region.
B. AZs are selected as part of the Mule application's deployment configuration.
C. Workers are randomly distributed across available AZs within that region.
D. An AZ is randomly selected for a Mule application, and all the Mule application's CloudHub workers are assigned to that one AZ.
Answer: C
Explanation:
The correct answer is: Workers are randomly distributed across available AZs within that region. This ensures high availability for deployed Mule applications.
MuleSoft documentation reference: https://docs.mulesoft.com/runtime-manager/cloudhub-hadr

44.An organization has previously provisioned its own AWS VPC hosting various servers. The organization now needs to use CloudHub to host a Mule application that will implement a REST API. Once deployed to CloudHub, this Mule application must be able to communicate securely with the customer-provisioned AWS VPC resources within the same region, without being interceptable on the public internet.
What Anypoint Platform features should be used to meet these network communication requirements between CloudHub and the existing customer-provisioned AWS VPC?
A. Add a MuleSoft-hosted Anypoint VPC configured with VPC peering to the AWS VPC
B. Configure an external identity provider (IdP) in Anypoint Platform with certificates from the customer-provisioned AWS VPC
C. Add a default API whitelisting policy in API Manager to automatically whitelist the customer-provisioned AWS VPC IP ranges needed by the Mule application
D. Use VM queues in the Mule application to allow any non-Mule assets within the customer-provisioned AWS VPC to subscribe to and receive messages
Answer: A
Explanation:
The correct answer is: Add a MuleSoft-hosted Anypoint VPC configured with VPC peering to the AWS VPC.
- Connecting to your Anypoint VPC extends your corporate network and allows CloudHub workers to access resources behind your corporate firewall.
- You can connect on-premises data centers through a secured VPN tunnel, a private AWS VPC through VPC peering, or by using AWS Direct Connect.
MuleSoft Doc Reference: https://docs.mulesoft.com/runtime-manager/virtual-private-cloud

45.A Mule application is being designed to receive nightly a CSV file containing millions of records from an external vendor over SFTP. The records from the file need to be validated, transformed, and then written to a database. Records can be inserted into the database in any order.
In this use case, what combination of Mule components provides the most effective and performant way to write these records to the database?
A. Use a Parallel For Each scope to insert records one by one into the database
B. Use a Scatter-Gather to bulk insert records into the database
C. Use a Batch Job scope to bulk insert records into the database
D. Use a DataWeave map operation and an Async scope to insert records one by one into the database
Answer: C
Explanation:
The correct answer is: Use a Batch Job scope to bulk insert records into the database. A Batch Job is the most efficient way to manage millions of records. A few points to note:
- Reliability: if you want reliability while processing the records, i.e. if the processing should survive a runtime crash or other unhappy scenarios and, when restarted, process all the remaining records, then go for batch, as it uses persistent queues.
- Error handling: in a Parallel For Each, an error in a particular route stops processing the remaining records in that route, and you would need to handle it using on-error-continue; a batch job does not stop on such errors, and instead you can have a dedicated step for failures with its own handling.
- Memory footprint: since the question says there are millions of records to process, a Parallel For Each aggregates all the processed records at the end and can possibly cause an out-of-memory error. A batch job instead provides a BatchJobResult in the On Complete phase, where you can get the count of failures and successes.
For huge file processing, if order is not a concern, definitely go ahead with a Batch Job.
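A minimal hypothetical sketch of the batch approach follows: poll SFTP, process the records in a batch step, and bulk-insert in aggregated groups. The config names, table, columns, and aggregator size are all assumptions:

<!-- Hypothetical batch job: validate, transform, and bulk insert CSV records -->
<flow name="nightlyFileFlow">
    <sftp:listener config-ref="vendorSftpConfig" directory="/inbound">
        <scheduling-strategy>
            <fixed-frequency frequency="1" timeUnit="DAYS"/>
        </scheduling-strategy>
    </sftp:listener>
    <!-- Assumes an upstream step has parsed the CSV into individual records -->
    <batch:job jobName="campaignRecordsJob">
        <batch:process-records>
            <batch:step name="insertStep">
                <batch:aggregator size="500">
                    <!-- Each group of 500 records is written in one bulk statement -->
                    <db:bulk-insert config-ref="dbConfig">
                        <db:sql>INSERT INTO campaign (id, name) VALUES (:id, :name)</db:sql>
                    </db:bulk-insert>
                </batch:aggregator>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <!-- payload here is the BatchJobResult with success/failure counts -->
            <logger level="INFO" message="#['Successful records: ' ++ (payload.successfulRecords as String)]"/>
        </batch:on-complete>
    </batch:job>
</flow>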
46.A REST API is being designed to implement a Mule application.
What standard interface definition language can be used to define REST APIs?
A. Web Service Definition Language (WSDL)
B. OpenAPI Specification (OAS)
C. YAML
D. AsyncAPI Specification
Answer: B

47.An organization is evaluating using the CloudHub shared load balancer (SLB) vs creating a CloudHub dedicated load balancer (DLB). They are evaluating how this choice affects the various types of certificates used by CloudHub-deployed Mule applications, including MuleSoft-provided, customer-provided, or Mule application-provided certificates.
What type of restrictions exist on the types of certificates that can be exposed by the CloudHub shared load balancer (SLB) to external web clients over the public internet?
A. Only MuleSoft-provided certificates are exposed.
B. Only customer-provided wildcard certificates are exposed.
C. Only customer-provided self-signed certificates are exposed.
D. Only underlying Mule application certificates are exposed (pass-through).
Answer: A
Explanation:
Reference: https://docs.mulesoft.com/runtime-manager/dedicated-load-balancer-tutorial

48.When designing an upstream API and its implementation, the development team has been advised to NOT set timeouts when invoking a downstream API, because the downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API. Assume the downstream API runs uninterrupted without crashing.
What is the impact of this advice?
A. The invocation of the downstream API will run to completion without timing out.
B. An SLA for the upstream API CANNOT be provided.
C. A default timeout of 500 ms will automatically be applied by the Mule runtime in which the upstream API implementation executes.
D. A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in which the downstream API implementation executes.
Answer: B
Explanation:
An SLA for the upstream API CANNOT be provided: because the upstream API waits indefinitely on a dependency with no guaranteed response time, no upper bound can be promised for its own response time.

49.An organization is creating a Mule application that will be deployed to CloudHub. The Mule application has a property named dbPassword that stores a database user's password.
The organization's security standards indicate that the dbPassword property must be hidden from every Anypoint Platform user after the value is set in the Runtime Manager Properties tab.
What configuration in the Mule application helps hide the dbPassword property value in Runtime Manager?
A. Use secure::dbPassword as the property placeholder name and store the cleartext (unencrypted) value in a secure properties placeholder file
B. Use secure::dbPassword as the property placeholder name and store the encrypted property value in a secure properties placeholder file
C. Add the dbPassword property to the secureProperties section of the pom.xml file
D. Add the dbPassword property to the secureProperties section of the mule-artifact.json file
Answer: D
Explanation:
To hide a property in Runtime Manager, list its name in the secureProperties section of the application's mule-artifact.json file; properties flagged this way are masked after their values are set in the Properties tab. The secure:: placeholder prefix belongs to the Secure Configuration Properties module, which encrypts values inside property files packaged with the application but does not hide values entered in Runtime Manager.
Reference: https://docs.mulesoft.com/runtime-manager/secure-application-properties
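As a minimal illustration of that mechanism, here is a hypothetical mule-artifact.json fragment (shown in the file's native JSON; only the secureProperties entry matters for this question, and the other field is a placeholder):

{
  "minMuleVersion": "4.4.0",
  "secureProperties": ["dbPassword"]
}

With this in place, once a deployer sets dbPassword in the Runtime Manager Properties tab, the value is masked in the console and cannot be read back by Anypoint Platform users.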