Exam Name : BusinessObjects Data Integrator XI(R) - Level Two
Questions and Answers : 33 Q&A
Updated On : October 18, 2017
PDF Download Mirror : DMDI301 Brain Dump
Get Full Version : Pass4sure DMDI301 Full Version
D. to_date(sales_date & ' ' & sales_time, 'dd-mmmm-yyyy hh24:mi:ss')
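(For reference, a minimal sketch of the kind of conversion the option above attempts, written in Data Integrator script syntax. The || concatenation operator and the format string are assumptions, not taken from the original question.)

# Sketch only: build a datetime value from separate date and time strings.
# Assumes sales_date holds values such as '18-OCT-2017' and sales_time values such as '14:30:00'.
to_date(sales_date || ' ' || sales_time, 'DD-MON-YYYY HH24:MI:SS')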
You have a production job that retrieves data from your Oracle 10g operational source system and loads it into your data warehouse. The operators of the source system are complaining that the Data Integrator load process is causing the system to perform poorly. Which two actions can you take to reduce the impact Data Integrator jobs have on the source system? (Choose two.)
Implement a CDC datastore for the source system to reduce the number of rows extracted.
Increase the value of the "array_fetch_size" parameter on the source table.
Perform intensive operations such as "group by" and "joins" in a staging area instead of on the source system (see the sketch after this list).
Use "linked datastores" to connect the source and target datastores.
Your Data Integrator environment interprets year values greater than 15 as 1915 instead of 2015. You must ensure Data Integrator interprets any two-digit year from "00 to 90" as "2000 to 2090" without making direct modifications to the underlying data flow. Which method should you use to accomplish this task?
Log into the Designer and select Tools | Options | Data | General, and modify the "Century Change Year" to 90.
Open the Server Manager, select Edit Job Server Config, and modify the "Century Change Year" to 90.
Open the Web Administrator tool, select Management | Repositories, edit the production repository, and modify the "Century Change Year" to 90.
On the Job Server, open Windows | Control Panel | Regional Settings | Customize Date, and modify the two-digit year interpretation to 90.
Configure the source database to interpret the two-digit dates appropriately.
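(A worked illustration of the "Century Change Year" setting, assuming it has been set to 90 as the scenario requires: two-digit years from 00 to 90 fall in the 2000s, and years from 91 to 99 fall in the 1900s. The date literals are examples only.)

# Sketch only: behaviour with Century Change Year = 90
to_date('01-JAN-15', 'DD-MON-YY');   # interpreted as 01-JAN-2015
to_date('01-JAN-95', 'DD-MON-YY');   # interpreted as 01-JAN-1995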
You load over 10,000,000 records from the "customer" source table into a staging area. You need to remove duplicate customers during the loading of the source table. You do not need to record or audit the duplicates. Which two de-duplicating techniques will ensure the best performance? (Choose two.)
Use a Query transform to order the incoming data set, and use the previous_row_value function in the WHERE clause to filter any duplicate rows (see the sketch after this list).
Use a Query transform to order the incoming data set, then use a Table_Comparison transform with the "Input contains duplicates" and "Sorted input" options selected.
Use the Table_Comparison transform with the "Input contains duplicates" and "Cached comparison table" options selected.
Use the lookup_ext function with the "PRE_LOAD_CACHE" option selected to test each row for duplicates.
You want to join the "sales" and "customer" tables. The tables reside in different datastores. The "sales" table contains approximately five million rows; the "customer" table contains approximately five thousand rows. The join occurs in memory. How would you set the source table options to maximize the performance of the operation?
Set the sales table join rank to 10 and the cache to "No"; set the customer table join rank to 5 and the cache to "Yes".
Set the sales table join rank to 10 and the cache to "Yes"; set the customer table join rank to 5 and the cache to "Yes".
Set the sales table join rank to 5 and the cache to "Yes"; set the customer table join rank to 10 and the cache to "No".
Set the sales table join rank to 5 and the cache to "No"; set the customer table join rank to 10 and the cache to "No".
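(A hedged note on the source table options involved: in Data Integrator the source with the higher join rank generally drives the join, and caching is most effective on the source small enough to fit in memory. The settings below are a sketch only, not an answer key.)

# Sketch only (source table options):
# SALES    : join rank = 10, cache = No    -- ~5,000,000 rows, streamed as the driving table
# CUSTOMER : join rank = 5,  cache = Yes   -- ~5,000 rows, small enough to hold in memory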
Where can the XML_Pipeline transform be used within a data flow? (Choose two)
Immediately after an XML source file.
Immediately after an XML source message.
Immediately after a Query containing nested data.
Immediately after an XML template.
You create a two-stage process for transferring data from a source system to a target data warehouse via a staging area. The job you create runs both processes in an overnight schedule. The job fails at the point of transferring the data from the staging area to the target data warehouse. During the work day you want to rerun the job without impacting the source system, and therefore want to run just the second stage of the process to transfer the data from the staging area to the data warehouse. How would you design this job?
Create two data flows: the first extracts the data from the source system; the second transfers the data to the target data warehouse.
Create one data flow which extracts the data from the source system and uses a Data_Transfer transform to stage the data in the staging area before continuing to transfer the data to the target data warehouse.
Create two data flows: the first extracts the data from the source system and uses a Data_Transfer transform to write the data to the staging area; the second extracts the data from the staging area and transfers it to the target data warehouse (see the sketch after this list).
Create one data flow which extracts from the source system and populates both the staging area and the target data warehouse.
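(A hedged sketch of the two-data-flow design with an intermediate staging table; all object names are hypothetical. Because the first data flow persists its output in the staging area, the second data flow can be rerun on its own without touching the source system.)

# Sketch only: restartable two-stage job (hypothetical names).
# JOB_LOAD_DW
#   DF_1_SOURCE_TO_STAGE : SOURCE.SALES    -> Query -> Data_Transfer -> STAGE.SALES_STG
#   DF_2_STAGE_TO_DW     : STAGE.SALES_STG -> Query -> DW.FACT_SALES
# If the job fails in DF_2, only DF_2 needs to be re-executed during the working day.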
Which two Data Integrator objects/operations support load balancing in a Server Group based architecture? (Choose two.)
You have a data flow that reads multiple XML files from a directory by specifying a wildcard in the file name. Which method can you use to link the XML file name to the records being read?
Select "include file name column" in the XML source file.
Use the function get_xml file name in the query mapping
Use the column "XML_fileNAME" listed at the top of the XML file structure.
Use the variable$ current_XML_file in the query mapping
You are trying to improve the performance of a simple data flow that loads data from a source table into a staging area and only applies some simple remapping using a Query transform. The source database is located on the WAN. The network administrator has told you that you can improve performance if you reduce the number of round trips that occur between the Data Integrator Job Server and the source database. What can you do in your data flow to achieve this?
Increase the array fetch size parameter in the source table editor.
Increase the commit size in the target table editor.
Increase the commit size in the source table editor.
Replace the source table with the SQL transform.
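(A worked illustration of the array fetch size option, under the assumption of roughly one network round trip per fetched array: with 1,000,000 source rows, an array fetch size of 1,000 requires about 1,000 fetch round trips, while raising it to 5,000 cuts this to about 200. The row count and sizes are examples, not figures from the question.)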