Oracle 1z0-531 Exam Practice Questions (P. 4)
Question #16
The data block density for a particular BSO database is between 10% and 90%, and data values within the block do not consecutively repeat. Which type of compression would be most appropriate to use?
- A. Bitmap
- B. RLE
- C. ZLIB
- D. No compression required
Correct Answer:
A
Bitmap compression is a good fit for non-repeating data. With this setting, Essbase uses bitmap or IVP (Index Value Pair) encoding.
Note: Bitmap compression is the default. Essbase stores only non-missing values and uses a bitmapping scheme. A bitmap uses one bit for each cell in the data block, whether the cell value is missing or non-missing. When a data block is not compressed, Essbase uses 8 bytes to store every non-missing cell. In most cases, bitmap compression conserves disk space more efficiently; however, much depends on the configuration of the data.
Incorrect answers:
RLE: You should switch to RLE compression when block density is below 3%, or when the database contains many identical values (for example, lots of zeros).
Note: RLE (Run Length Encoding) is a good compression type when your data contains many zeros (low block density) or values that frequently repeat. With the RLE setting, Essbase chooses the most efficient method for each block, using RLE, bitmap, or IVP.
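As a rough back-of-the-envelope illustration of the note above, the sketch below estimates how much a bitmap saves on a hypothetical mid-density block. It assumes the uncompressed block stores every cell (missing or not) at 8 bytes, while the compressed block stores 8 bytes per non-missing cell plus a 1-bit-per-cell bitmap; the cell count and density are made-up numbers, and per-block header overhead is ignored.

```python
# Back-of-the-envelope estimate of bitmap compression for one BSO data block.
# Assumptions (illustrative only): an uncompressed block stores every cell at
# 8 bytes; a bitmap-compressed block stores 8 bytes per non-missing cell plus
# a 1-bit-per-cell bitmap. Block headers and other overhead are ignored.

CELLS_PER_BLOCK = 1000   # hypothetical block size, in cells
DENSITY = 0.40           # 40% non-missing, inside the 10%-90% band from the question

non_missing = int(CELLS_PER_BLOCK * DENSITY)

uncompressed_bytes = CELLS_PER_BLOCK * 8
bitmap_bytes = non_missing * 8 + CELLS_PER_BLOCK / 8   # stored values + bitmap

saving = 100 * (1 - bitmap_bytes / uncompressed_bytes)
print(f"uncompressed: {uncompressed_bytes} bytes")
print(f"bitmap:       {bitmap_bytes:.0f} bytes ({saving:.0f}% smaller)")
```

Because the values in the block do not repeat consecutively, RLE would have little to compress beyond this, which is why bitmap (the default) is the appropriate choice here.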
Question #17
You need to calculate average units sold by the customer dimension within an ASO database. The member formula should calculate correctly regardless of level within the customer dimension. Identify the correct syntax for the member formula.
- A. @AVG (SKIPBOTH, "Units_Sold");
- B. Avg(Customer.CurrentMember.Children, [Units_Sold])
- C. Avg([Customer].Children, [Units_Sold]);
- D. Avg(Customer.CurrentMember.Children, [Units_Sold]);
- E. Avg(Customer.Children, [Units.Sold]);
Correct Answer:
B
One custom rollup technique, custom rollup formulas, lets the cube builder define an MDX formula for each dimension level. Analysis Services uses this formula to determine the values of the dimension level's members. For example, you could use an AVERAGE function rather than a summation to calculate the value of all members in one dimension level. If you use the AVERAGE function, the MDX formula for a dimension called Customers would be Avg( Customers.CurrentMember.Children ).
Note: The MultiDimensional eXpressions (MDX) language provides a specialized syntax for querying and manipulating the multidimensional data stored in OLAP cubes. While it is possible to translate some of these expressions into traditional SQL, it would frequently require the synthesis of clumsy SQL expressions even for very simple MDX expressions. MDX has been embraced by a wide majority of OLAP vendors and has become the standard for OLAP systems.
Question #18
With an average block density greater than 90%, what should you do?
- A. You should reconsider the dense and sparse settings
- B. You should consider no compression
- C. You should set Commit blocks to
- D. You should reconsider the outline order of dimensions
Correct Answer:
B
Hyperion recommends turning compression off when block density is greater than 90% (which rarely happens) and switching to RLE compression when block density is below 3% (or when the database contains many identical values, such as lots of zeros).
Note: You may want to disable data compression if blocks have very high density (90% or greater) and few consecutive, repeating data values. Under these conditions, enabling compression consumes resources unnecessarily and can become a drain on the processor. Do not use compression if disk space and memory are not a concern for your application.
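Using the same simplified model as in Question #16 (8 bytes per cell when uncompressed, 8 bytes per non-missing cell plus a 1-bit-per-cell bitmap when compressed, headers ignored), a short loop shows why the savings evaporate once density passes roughly 90%. This is only an illustration of the trend, not an official sizing formula.

```python
# Illustration of why bitmap compression stops paying off at very high density.
# Same simplified model as before: uncompressed = 8 bytes per cell;
# compressed = 8 bytes per non-missing cell + a 1-bit-per-cell bitmap.
# Block headers and RLE/IVP behaviour are ignored (illustrative only).

CELLS_PER_BLOCK = 1000

for density in (0.10, 0.50, 0.90, 0.95, 1.00):
    non_missing = CELLS_PER_BLOCK * density
    uncompressed = CELLS_PER_BLOCK * 8
    compressed = non_missing * 8 + CELLS_PER_BLOCK / 8
    saving = 100 * (1 - compressed / uncompressed)
    print(f"density {density:4.0%}: bitmap saving ~{saving:5.1f}%")
```

Above 90% density the saving is a few percent at best, and it turns slightly negative at 100% because the bitmap itself adds overhead, while the CPU cost of compressing and decompressing every block remains. That trade-off is the reason for turning compression off in this case.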
Question #19
You are performing incremental loads to an ASO database during the day, providing near-real-time data to the SaleDtl ASO database. Before each incremental load, you need to clear a specific set of data as quickly as possible.
What is the best solution?
- A. Partial clears are supported for ASO
- B. Perform a logical clear, using MDX to specify the region to be cleared
- C. Perform a physical clear, using MDX to specify the region to be cleared
- D. Run a calc script containing the CLEARDATA command and a set of FIX statements that isolate the desired data set
- E. Run a calc script containing the CLEARBLOCK command and a set of FIX statements that isolate the desired data set
Correct Answer:
AB
Within ASO, partial clears are supported.
A logical clear is faster than a physical clear: it writes offsetting values to a new data slice so that the specified region totals zero, whereas a physical clear physically removes the cells from the database, which takes longer.
Question #20
Identify the two true statements about materialization in ASO.
- A. When performing an incremental data load, aggregate views are updated
- B. The database is not available during materialization
- C. Materialization can be tuned via query hints and hard restrictions defined at the database level
- D. Materialization scripts can be saved for future reuse
Correct Answer:
AD
The following process is recommended for defining and materializing aggregations:
* After the outline is created or changed, load data values.
* Perform the default aggregation. Do not select the option to specify a storage stopping point.
* Materialize the suggested aggregate views and save the default selection in an aggregation script.
* Run the types of queries the aggregation is being designed for.
* If query time or aggregation time is too long, consider fine-tuning the aggregation.
* Save the aggregation selection as an aggregation script (D).