Archive for the ‘Exadata’ Category

All (that I know) about Exadata

April 25, 2013

Having been associated with Exadata in one way or another since its release (see: http://technofunctionalconsulting.blogspot.in/2008/09/exadata-database-processing-moves-into.html ), I have consolidated the key points related to Exadata for a session to a technical audience.

Related Posts: 

http://technofunctionalconsulting.blogspot.in/2009/10/exadata-v2-worlds-first-oltp-database.html

http://technofunctionalconsulting.blogspot.in/2010/02/hybrid-columnar-compression-hcc.html

http://technofunctionalconsulting.blogspot.in/2012/06/exadata-performance-features.html


Small & Big Data processing philosophies

January 3, 2013
In this first post of 2013, I would like to cover some fundamental philosophical aspects of “data” & “processing”.
As the buzz around “Big Data” runs high, I have classified the original structured, relational data as “small data”, even though I have seen some very large databases holding 100+ terabytes of data with an I/O volume of 150+ terabytes per day.
Present-day data processing predominantly uses the Von Neumann architecture of computing, in which “data” and its “processing” are distinct and separated into “memory” and “processor” connected by a “bus”. Any data that needs to be processed is moved into the processor over the bus, the required arithmetic or logical operation is performed on it to produce the “result”, and the result is then moved back to memory/storage for further reference. The list of operations to be performed (the processing logic, or program) is stored in memory as well, so the next instruction to be carried out also has to be moved from memory into the processor over the bus.
So, in essence, both the data and the operations to be performed on it sit in memory, which cannot process data, while the processor, which can, always depends on memory: that is the Von Neumann architecture.
Traditionally, the “data” has been moved to the place where the processing logic is deployed, because the amount of data is small while the processing applied to it is relatively heavy and complex. RDBMS engines like Oracle read blocks from storage into the SGA buffer cache of the running database instance for processing, and transactions modify only small amounts of data at any given time.
Over time, “analytical processing” required bringing huge amounts of data from storage to the processing node, which created a bottleneck on the network pipe. Add to that the large growth in semi-structured and unstructured data that started flowing in, and a different philosophy of data processing was needed.
Enter the HDFS and map-reduce framework of Hadoop, which took the processing to the data. Around the same time came Oracle Exadata, which took database processing down to the storage layer with a feature called “query offloading”.
In the new paradigm, snippets of processing logic are shipped to a cluster of connected nodes where the data, mapped with a hashing algorithm, resides, and the results of that processing are then reduced to produce the final result sets. It has become economical to take the processing to the data because the amount of data is relatively large while the required processing consists of fairly simple tasks such as matching, aggregating, indexing, etc.
So, we now have the choice of taking small amounts of data to complex processing in structured RDBMS engines with the traditional shared-everything architecture, as well as taking the processing to the data in shared-nothing big data architectures. The right choice depends purely on the type of “data processing” problem at hand: traditional RDBMS technologies will not be replaced by the new big data architectures, nor can the new big data problems be solved by traditional RDBMS technologies. They go hand in hand, complementing each other and adding value to the business when properly implemented.
Non-Von Neumann architectures still deserve more attention from technologists; they probably hold the key to the way the human brain processes and deals with information seamlessly, whether it is structured data or unstructured streams of voice, video and so on.
Any non-Von-Neumann architecture enthusiasts over here?

Multi-Tenancy and Resource Management

August 8, 2012

Today my association with computer software turns 24 years; I have spent most of the past year on Oracle Exadata – an appliance (a hardware + software bundle in specified configurations such as 1/4 rack, 1/2 rack or a full rack).

In the pre-appliance world, the underlying deployment architecture of server, network and storage would be built to the application’s requirements, with quality attributes such as “portability”, “scalability” and so on.

An application would be sized for its required CPU, memory, I/O and storage capacity, along with its local fail-over and disaster recovery requirements, and the underlying infrastructure would be built from either physical or virtual server and storage components. The number of nodes in the cluster and the size of each node would be carefully planned.

But with Exadata, the configuration is fixed by Oracle: a full rack comes pre-configured with 8 compute nodes and 14 storage cells, each compute node having 24 CPU cores and 96GB of memory.

Now this Exadata appliance needs to be shared by multiple applications. This is where the complexity of multi-tenancy starts. How do we ensure Quality of Service? The main levers are:

1. Server pools and Instance Caging
2. Service design
3. Database Resource Manager
4. I/O Resource Manager

I think it is always good to have one database per application. Hosting multiple applications on a single database instance can be tricky with respect to CPU allocation; instance caging keeps the databases sharing a node from starving each other, as in the sketch below.
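
A minimal instance-caging sketch (the CPU count and the plan name below are illustrative assumptions, not taken from this post):

-- Run on each database instance that shares the compute node, so that the
-- CPU_COUNT values add up to (or deliberately oversubscribe) the node's cores.
ALTER SYSTEM SET cpu_count = 8 SCOPE = BOTH;
-- Instance caging is enforced only while a resource manager plan is active.
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH;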

The next most challenging task is allocating memory across the multiple applications. This is one thing that is still done manually: automatic SGA and PGA management and memory tuning within an instance keep improving, but the memory target for each database still has to be set by hand, along the lines of the sketch below.
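
A hypothetical split of a 96GB compute node between two databases (the database names and sizes are assumptions for illustration only; they must be chosen so that all instances fit within the node's physical memory):

-- On database DB1:
ALTER SYSTEM SET sga_target = 24G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE = SPFILE;
-- On database DB2:
ALTER SYSTEM SET sga_target = 16G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 6G SCOPE = SPFILE;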

Classifying the workload within a database into “resource consumer groups” – using attributes such as USERNAME, SERVICE, CLIENT PROGRAM NAME, CLIENT OS USERNAME, MODULE and ACTION – and attaching those consumer groups to a resource manager plan is done with DBRM. Every user session should be placed into the right consumer group dynamically, based on multiple attributes, rather than always putting USER1 into a medium consumer group: if USER1 is performing an important action, that session should be prioritized higher. The sketch below illustrates the idea.
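
A hedged DBRM sketch (the group, plan, module and user names are hypothetical; a real plan needs more directives, plus switch privileges granted via DBMS_RESOURCE_MANAGER_PRIVS):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('HIGH_CG',   'Critical interactive work');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MEDIUM_CG', 'Everything else for USER1');

  -- Map sessions dynamically: the important module wins over the plain user mapping.
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    DBMS_RESOURCE_MANAGER.MODULE_NAME, 'ORDER_ENTRY', 'HIGH_CG');
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    DBMS_RESOURCE_MANAGER.ORACLE_USER, 'USER1', 'MEDIUM_CG');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN('APP_PLAN', 'Per-workload CPU priorities');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('APP_PLAN', 'HIGH_CG',
    'Top priority',    mgmt_p1 => 70);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('APP_PLAN', 'MEDIUM_CG',
    'Second level',    mgmt_p2 => 70);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('APP_PLAN', 'OTHER_GROUPS',
    'Everything else', mgmt_p3 => 100);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/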

Finally, controlling I/O operations on the cells, both across databases and across multiple workloads within a database, is also a very important activity for maintaining the right Quality of Service (QoS). An IORM inter-database plan, plus a category plan for prioritizing the workloads from within a database, should be configured, as outlined below.
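
The inter-database IORM plan itself is configured on each storage cell, but the category that each consumer group belongs to – which is what an IORM category plan prioritizes – is tagged inside the database. A minimal sketch, reusing the hypothetical groups from above and the built-in INTERACTIVE and BATCH categories:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- Tag the consumer groups with categories that the cells' IORM category plan
  -- can then prioritize across all databases on the machine.
  DBMS_RESOURCE_MANAGER.UPDATE_CONSUMER_GROUP(
    consumer_group => 'HIGH_CG',   new_category => 'INTERACTIVE');
  DBMS_RESOURCE_MANAGER.UPDATE_CONSUMER_GROUP(
    consumer_group => 'MEDIUM_CG', new_category => 'BATCH');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/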

In my opinion, performance management in the appliance world has become more complicated because of the complexity involved in QoS and resource management. We now have to develop applications that are best suited to exploit the platform features of an appliance like Exadata.

Questions to think about:
Is this going against “portability”? How easy is it to port applications from one appliance to another?

Exadata performance features

June 19, 2012

Recently I reviewed an Exadata implementation (a roughly 66TB data warehouse with multiple marts running as different services on a single database on a full-rack Exadata V2) for performance improvements. This post summarizes the key points application developers, designers and DBAs should be aware of when deploying applications onto Oracle Exadata V2.

1. “Smart Scan” is a cell offloading feature in which the selection/projection of a SQL statement is offloaded to the storage cells instead of being performed on the compute node after reading all the required blocks into the buffer cache. It works with full table scans (FTS) and full index scans that use direct path reads. It can dramatically improve the performance of an FTS, but that does not mean all processing should happen via FTS and all indexes should be removed! For a single-row lookup, or when a very small number of records is read from a large table, an index-based lookup is still much faster than an FTS, even with Smart Scan.

Smart Scan is a run-time decision rather than an optimizer-time decision. It depends on the number of sessions requesting the data, the number of dirty blocks, and the size of the table relative to _small_table_threshold (by default Oracle treats about 2% of the buffer cache as the small-table threshold, which is not always appropriate; the parameter can be tweaked at session level if needed, as in the sketch further below).

In the execution plan one can see the STORAGE keyword in operations such as “TABLE ACCESS STORAGE FULL”, and the corresponding statistic in the V$ views is “cell physical IO bytes eligible for predicate offload”.
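
A quick, hedged way to check both (the table name is hypothetical; access to the V$ views is assumed):

-- Run the scan, then display its plan; look for TABLE ACCESS STORAGE FULL
-- and a storage(...) section under the predicate information.
SELECT /*+ FULL(t) */ COUNT(*) FROM sales_hist t WHERE status = 'OPEN';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- How many bytes of this session's physical I/O were eligible for offload.
SELECT n.name, s.value
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'cell physical IO bytes eligible for predicate offload';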

To force direct reads for serial operations at the session level, “_serial_direct_read” can be set to TRUE, as in the sketch below.
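
A session-level sketch of the two underscore parameters mentioned above (the threshold value is an illustrative assumption, and such hidden parameters should only be changed after testing):

ALTER SESSION SET "_serial_direct_read" = TRUE;
-- _small_table_threshold is expressed in blocks; lowering it makes more tables
-- eligible for serial direct path reads (and hence Smart Scan), raising it does
-- the opposite. Exact behaviour is version-dependent; 1000 is purely illustrative.
ALTER SESSION SET "_small_table_threshold" = 1000;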

2. “Storage Index” is another feature, in which each cell builds a dynamic negative index of the data that is definitely not present on the cell, by keeping min and max value ranges for the column values stored on the cell. This in-memory structure is built dynamically as queries are offloaded to the storage. The feature gives a performance improvement similar to “partition pruning” on partitioned tables. To take best advantage of it, load data into the table ordered by the most-used WHERE clause predicate columns. ETL processes should load the data serially using the “APPEND” hint so that SELECT statements get the best benefit from the storage indexes.
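
A minimal sketch of such an ordered, direct-path load (the table and column names are hypothetical):

-- Serial direct-path (APPEND) load, ordered by the most common predicate column,
-- so that each storage region holds a tight min/max range for SALE_DATE.
INSERT /*+ APPEND */ INTO sales_fact
SELECT * FROM stg_sales_fact
ORDER BY sale_date;
COMMIT;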

3. In a data-warehouse-type environment, where most of the time all the rows are accessed but each query touches only a subset of the columns, Hybrid Columnar Compression (HCC) improves performance. With the COMPRESS FOR QUERY HIGH mode of HCC, queries that use only a few columns of the table read just the required column data and perform better.
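
A hedged example of enabling HCC (the table names are hypothetical; an existing table can also be rebuilt with ALTER TABLE ... MOVE COMPRESS FOR QUERY HIGH):

-- Create a query-high compressed copy of a fact table via CTAS.
CREATE TABLE sales_fact_hcc
COMPRESS FOR QUERY HIGH
AS SELECT * FROM sales_fact;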

It is important to consider these features during the design of an application; building the application to take advantage of them will tremendously reduce resource consumption on the platform while giving the best throughput.

But, 

It is still important to have the correct indexing and partitioning strategies in place, even with these features, to make absolutely sure that performance remains predictable. Just letting tables grow large with all their history data, without the right partitioning strategy, will let performance degrade over time even with Smart Scan, storage indexes and HCC! A sketch of one such strategy follows.
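
For instance, a minimal sketch of range/interval partitioning a history-heavy table by date (all names and the monthly interval are assumptions for illustration):

-- Monthly interval partitioning keeps old history in separate partitions and
-- lets the optimizer prune partitions for date-bounded queries.
CREATE TABLE orders_hist (
  order_id   NUMBER,
  order_date DATE,
  status     VARCHAR2(10)
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_initial VALUES LESS THAN (DATE '2012-01-01')
);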