Developing reports with Excel is quite simple. The approach is:
1. Design an interactive interface in Excel that includes controls such as list boxes, drop-down boxes, text boxes, labels, buttons, radio buttons, and check boxes. For example: a region list box, a product-category list box, a Process (data retrieve) button, and a Submit or Update button.
2. Attach the appropriate VBA code to these controls and call VBA macro functions from them, such as ESSConnect(), ESSMenu(), ESSRetrieve(), etc.
3. Run the on-demand interactive Excel report to make ad hoc queries against the Essbase data and navigate across the sheets.
4. The VBA code behind these controls can be saved as a VBA function library so it can be reused in other Excel reports.
The following is a simple but representative sample:
Declare Function EssVConnect Lib "ESSEXCLN.XLL" (ByVal sheetName As Variant, ByVal username As Variant, ByVal password As Variant, ByVal server As Variant, ByVal application As Variant, ByVal database As Variant) As Long
Declare Function EssVGetMemberInfo Lib "ESSEXCLN.XLL" (ByVal sheetName As Variant, ByVal mbrName As Variant, ByVal action As Variant, ByVal aliases As Variant) As Variant
Declare Function EssVDisconnect Lib "ESSEXCLN.XLL" (ByVal sheetName As Variant) As Long
Declare Function EssVFreeMemberInfo Lib "ESSEXCLN.XLL" (ByRef memInfo As Variant) As Long
Const EssBottomLevel = 3
Const EssChildLevel = 1
Const username = "admin"
Const password = "password"
Const ServerName = "bobpc"
Const ApplicationName = "Sample"
Const EssbaseDataBase = "Basic"
Sub Disconnect()
sFileName = ThisWorkbook.Name
For Each wsSheet In Worksheets
X = EssVDisconnect("[" & sFileName & "]" & wsSheet.Name) ' sheet name format: [Book.xls]SheetName
Next wsSheet
If X = 0 Then
answer = MsgBox("Disconnected!", 64, "Connection Dialog")
End If
End Sub
Sub lstRegion_Change()
v_ListIndex = Sheet1.Shapes(2).ControlFormat.ListIndex
v_ListText = Sheet1.Shapes(2).ControlFormat.List(v_ListIndex)
Call GetBottom(v_ListText)
End Sub
Sub UnselectListBox(ByVal ListBoxNam As Variant)
End Sub
Sub Refresh()
Call GetChild("Market")
Call GetBottom("Market")
End Sub
Sub Process()
End Sub
Sub Submit()
End Sub
Sub Export()
With Sheet1
Set cb = .Shapes.AddFormControl(xlCheckBox, 10, 10, 100, 10)
cb.ControlFormat.LinkedCell = "A1"
End With
End Sub
Sub Quit()
End Sub
Sub Actual()
'MsgBox (Sheet1.Shapes.Item(3).Name)
End Sub
Sub Budget()
'MsgBox (Sheet1.Shapes(3).ControlFormat.LinkedCell
End Sub
Sub Conn()
sFileName = ThisWorkbook.Name
For Each wsSheet In Worksheets
X = EssVConnect("[" & sFileName & "]" & wsSheet.Name, username, password, ServerName, ApplicationName, EssbaseDataBase)
Next wsSheet
If X = 0 Then
' answer = MsgBox("Connected!", 64, "Connection Dialog")
Else
answer = MsgBox("not Connected!", 64, "Connection Dialog")
End If
End Sub
Public Sub Tester()
Dim rng As Range
Dim LastRow As Long
LastRow = Cells(Rows.Count, "A").End(xlUp).Row
Set rng = Cells(LastRow - 1, 1) ' Range(LastRow - 1) is invalid; reference the cell above the last used row
rng.Select
End Sub
Sub SelectFirstToLastInColumn()
' Select From the First NonBlank to the Last Nonblank in the Column
Cells(1, 1).Select
Set TopCell = Cells(1, ActiveCell.Column)
Set BottomCell = Cells(Rows.Count, ActiveCell.Column)
If IsEmpty(TopCell) Then Set TopCell = TopCell.End(xlDown)
If IsEmpty(BottomCell) Then Set BottomCell = BottomCell.End(xlUp)
If TopCell.Row = Rows.Count And BottomCell.Row = 1 Then
ActiveCell.Select
Else
Range(TopCell, BottomCell).Select
End If
End Sub
Sub GetChild(ByVal mbrName As Variant)
Call Conn
Dim vt As Variant
Dim cbItems As Variant
Dim i As Integer
Dim pMember As String
vt = EssVGetMemberInfo("[" & ThisWorkbook.Name & "]Sheet1", mbrName, EssChildLevel, False)
If IsArray(vt) Then
cbItems = UBound(vt) + 1
For i = 0 To UBound(vt)
Sheet1.Cells(i + 1, 1) = vt(i)
Next
Else
MsgBox ("Return Value = " + Str(vt))
End If
' Select From the First NonBlank to the Last Nonblank in the Column
Cells(1, 1).Select
Set TopCell = Cells(1, ActiveCell.Column)
Set BottomCell = Cells(Rows.Count, ActiveCell.Column)
If IsEmpty(TopCell) Then Set TopCell = TopCell.End(xlDown)
If IsEmpty(BottomCell) Then Set BottomCell = BottomCell.End(xlUp)
If TopCell.Row = Rows.Count And BottomCell.Row = 1 Then
ActiveCell.Select
Else
Range(TopCell, BottomCell).Select
End If
Sheet1.Shapes(2).ControlFormat.ListFillRange = Range(TopCell, BottomCell).Address
'MsgBox (Worksheets(1).Shapes(2).Name)
X = EssVFreeMemberInfo(vt)
End Sub
Sub lstCity_Clear()
For i = 1 To 20
Sheet1.Cells(i, 2).ClearContents ' assigning Null to a cell is unreliable; clear it instead
Next
End Sub
Sub GetBottom(ByVal mbrName As Variant)
Call lstCity_Clear
Dim vt As Variant
Dim cbItems As Variant
Dim i As Integer
Dim pMember As String
vt = EssVGetMemberInfo("[" & ThisWorkbook.Name & "]Sheet1", mbrName, EssBottomLevel, False)
If IsArray(vt) Then
cbItems = UBound(vt) + 1
For i = 0 To UBound(vt)
Sheet1.Cells(i + 1, 2) = vt(i)
Next
Else
MsgBox ("Return Value = " + Str(vt))
End If
Cells(1, 2).Select
Set TopCell = Cells(1, ActiveCell.Column)
Set BottomCell = Cells(Rows.Count, ActiveCell.Column)
If IsEmpty(TopCell) Then Set TopCell = TopCell.End(xlDown)
If IsEmpty(BottomCell) Then Set BottomCell = BottomCell.End(xlUp)
If TopCell.Row = Rows.Count And BottomCell.Row = 1 Then
ActiveCell.Select
Else
Range(TopCell, BottomCell).Select
End If
Sheet1.Shapes(3).ControlFormat.ListFillRange = Range(TopCell, BottomCell).Address
X = EssVFreeMemberInfo(vt)
End Sub
11/29/09
BSO Intelligent Calculation
You can calculate the correct results by telling Essbase not to mark the calculated data blocks as
clean. The following calculation script calculates the correct results:
SET CLEARUPDATESTATUS OFF;
FIX(“New York”)
CALC DIM(Product);
ENDFIX
SET CLEARUPDATESTATUS AFTER;
FIX(“New York”)
CALC DIM(Year);
ENDFIX
With the SET CLEARUPDATESTATUS OFF command, Essbase calculates the dirty data blocks but
does not mark them as clean, unlike the SET CLEARUPDATESTATUS AFTER command.
This solution assumes that the data blocks are not marked as clean from a previous partial
calculation of the database.
You can ensure that all data blocks are calculated, regardless of their status, by disabling
Intelligent Calculation. The following calculation script calculates all specified data blocks,
regardless of their clean or dirty status:
SET UPDATECALC OFF;
FIX(“New York”)
CALC DIM(Year, Product);
ENDFIX
Because you have not used the SET CLEARUPDATESTATUS AFTER command, Essbase does
not mark calculated data blocks as clean.
Optimizing Parallel Data Load Processing
Although Essbase uses parallel processing to optimize processor resources across the data load stages, processor resources are idle at times. To take advantage of these times, Essbase can further divide record processing in the preparation and write stages. To tailor parallel processing to your situation, you can use the DLTHREADSPREPARE and DLTHREADSWRITE essbase.cfg settings to tell Essbase to use additional threads during these stages.
DLTHREADSPREPARE: Specifies how many threads Essbase may use during the data load stage that codifies and organizes the data in preparation for writing it to blocks in memory.
DLTHREADSWRITE: Specifies how many threads Essbase may use during the data load stage that writes data to disk. High values may require allocation of additional cache. See Implications in Sizing the Data Cache.
For ASO, Essbase Server uses one thread with the aggregate storage cache; the DLTHREADSWRITE setting is ignored.
DLSINGLETHREADPERSTAGE: Specifies that Essbase use a single thread per stage, ignoring the values of the DLTHREADSPREPARE and DLTHREADSWRITE settings.
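A sketch of how these settings might look in essbase.cfg (the application, database, and thread counts are illustrative assumptions, not tuning recommendations):
DLSINGLETHREADPERSTAGE Sample Basic FALSE
DLTHREADSPREPARE Sample Basic 3
DLTHREADSWRITE Sample Basic 4
DLSINGLETHREADPERSTAGE must be FALSE for the two thread-count settings to take effect for Sample.Basic data loads.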
11/28/09
Java - backup Essbase
We can create a Java procedure that automatically backs up the Essbase database, and even recovers it when necessary.
package com.essbase.samples.japi;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.io.File;
import com.essbase.api.base.EssException;
import com.essbase.api.datasource.EssSEQID;
import com.essbase.api.datasource.EssTRANSACTION;
import com.essbase.api.datasource.EssTRANSACTION_REPLAY;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapFileObject;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.session.IEssbase;
/**
* Signs on to essbase domain,
* creates an App and Cube, backs up the database (cube), and then restores it.
* In order for this sample to work in your environment, make sure to
* change the s_* variables to suit your environment.
*
* @author
* @version
*/
public class BackupAndRestoreDatabase {
// NOTE: Change the following variables to suit your setup.
private static String s_userName = "system";
private static String s_password = "password";
private static String s_olapSvrName = "localhost";
/* Possible values for s_provider:
"Embedded" or "http://localhost:13080/aps/JAPI" */
private static String s_provider = "Embedded"; // Default
private static final int FAILURE_CODE = 1;
public static void main(String[] args) {
int statusCode = 0;
IEssbase ess = null;
IEssOlapServer olapSvr = null;
try {
acceptArgs(args);
// Create JAPI instance.
ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
// Sign On to the Provider
IEssDomain dom = ess.signOn(s_userName, s_password, false, null,s_provider);
// Open connection with OLAP server and get the cube.
olapSvr = (IEssOlapServer) dom.getOlapServer(s_olapSvrName);
olapSvr.connect();
try {
// Delete the App if it already exists
olapSvr.getApplication("BackUp").delete();
} catch (EssException x) {
// Ignore Error
}
// Create a new Application/Cube : BackUp/Basic - Copy of Sample/Basic
olapSvr.createApplication("BackUp");
dom.copyCube(s_olapSvrName, "Sample", "Basic", s_olapSvrName, "BackUp","Basic");
olapSvr.disconnect();
olapSvr.connect();
IEssCube cube = olapSvr.getApplication("BackUp").getCube("Basic");
BackUpAndRestore(cube);
System.out.println("Cube Archive and Restore Sample completed.");
// Transaction logging requires the below essbase property to be set in essbase.cfg. Choose one
// of the below ways to set it.
// TransactionLogLocation AppName DbName LogLocation NATIVE ENABLE|DISABLE
// TransactionLogLocation AppName LogLocation NATIVE ENABLE
// TransactionLogLocation LogLocation NATIVE ENABLE
// Ex: TransactionLogLocation Sample Basic D:\Hyperion\AnalyticServices-950\app\Sample\Basic NATIVE ENABLE
// TransactionLoggingAndReplay(cube);
// Delete newly created Application.
olapSvr.getApplication("BackUp").delete();
} catch (EssException x) {
System.out.println("Error: " + x.getMessage());
statusCode = FAILURE_CODE;
} finally {
// Close OLAP server connection and sign off from the domain.
try {
if (olapSvr != null && olapSvr.isConnected() == true)
olapSvr.disconnect();
} catch (EssException x) {
System.out.println("Error: " + x.getMessage());
}
try {
if (ess != null && ess.isSignedOn() == true)
ess.signOff();
} catch (EssException x) {
System.out.println("Error: " + x.getMessage());
}
}
// Set status to failure only if exception occurs and do abnormal termination
// otherwise, it will by default terminate normally
if (statusCode == FAILURE_CODE)
System.exit(FAILURE_CODE);
}
static void TransactionLoggingAndReplay(IEssCube cube) throws EssException {
Calendar cal = Calendar.getInstance();
cal.setTime(new Date());
cal.add(Calendar.DATE, -2);
cal.getTimeInMillis();
SimpleDateFormat fmt = new SimpleDateFormat("MM/dd/yyyy:HH:mm:ss");
String date = fmt.format(cal.getTime());
System.out.println("From Date specified for Transactions is :"+ date);
EssTRANSACTION[] list = cube.listTransactions((short)1, date, IEssCube.ESS_LIST_TRANSACTIONS_TOCLIENT, "");
if (list == null || list.length ==0) {
System.out.println("\nNo transactions to List or Replay since "+ date +".\n"
+"Please comment out the BackUpAndRestore(cube) function call in this sample and\n ensure you have executed a transaction like loaddata prior to running this sample.");
return;
}
for (int i = 0; i < list.length; i++) {
System.out.println(list[i] + "\n");
}
System.out.println("List transactions complete");
EssTRANSACTION_REPLAY replayTran = new EssTRANSACTION_REPLAY((byte)2, date, 1);
EssSEQID[] seqIds = new EssSEQID[1];
seqIds[0] = new EssSEQID(list[0].getSeq_id(), list[0].getSeq_id_upper(), 1, list[0].getSeq_id_upper());
cube.replayTransactions(replayTran, seqIds);
System.out.println("Replay transactions complete");
}
static void BackUpAndRestore(IEssCube cube) throws EssException {
cube.loadData(true, false, "Product Market Actual Sales Jan 4469\n"
+ "Product Market Actual Sales Feb 42494");
String ArchiveFile = System.getProperty("java.io.tmpdir") + "demobasic.arc";
cube.archiveDatabase(ArchiveFile, "", true); // Take backup.
cube.loadData(IEssOlapFileObject.TYPE_RULES, null, IEssOlapFileObject.TYPE_TEXT, "Calcdat", false);
String[] Src = null;
String[] Dest = null;
// Unload the database before restoring.
do {
try {
Thread.sleep(5000);
cube.stop();
break;
} catch (EssException x) {
// If unloading fails because the database is in use (Error #1013113),
// wait 5 seconds and try again.
if (x.getNativeCode() == 1013113)
continue;
else
break;
} catch (Exception x) {
break;
}
} while (true);
cube.restoreDatabase(ArchiveFile, false, Src, Dest); // Restore database.
(new File(ArchiveFile)).delete();
}
static void acceptArgs(String[] args) throws EssException {
if (args.length >= 4) {
s_userName = args[0];
s_password = args[1];
s_olapSvrName = args[2];
s_provider = args[3]; //PROVIDER
} else if (args.length != 0) {
System.err.println("ERROR: Incorrect Usage of this sample.");
System.err.println("Usage: java " + BackupAndRestoreDatabase.class.getName()
+ " ");
System.exit(1); // Simply end
}
}
}
Load Huge Outline Members for ASO
I am building an ASO cube with Essbase (9.3.1) for the shipping industry.
The outline involves three dimensions with very large member counts:
1. DIM_SHIPPER (1,465,000), max generation is 5
2. NA_STATE (162,300), max generation is 3
3. DIM_CONSINEE (875,200), max generation is 5
4. The other dimensions have no more than 100,000 members each; the two largest have 80,000 and 60,000, and the rest only a few thousand.
The outline includes 10 dimensions in total (not counting the accounts dimension).
Question:
1. After loading all dimensions except DIM_SHIPPER, NA_STATE, and DIM_CONSINEE, I found I could not open the outline in edit mode, and Essbase reported an 'out of memory' error. At the same time I could no longer use ODI to load dimensions; ODI kept reporting network and timeout errors. (I think the timeout error is because Essbase did not respond for a long time; am I correct?)
Answer:
1. Since ODI could not be used to build the dimensions, I changed the loading approach and used flat files to build them. This works, and the dimensions can be built in Essbase, but performance is very poor: building the DIM_CONSINEE dimension alone took nearly 8 hours.
2. Check for disk space. This error usually occurs when there is insufficient disk space to create the .pag files.
3. Increase your agent thread count; research these essbase.cfg settings. I have seen this issue in the past when the agent was too busy, and adjusting the threads helped greatly.
AGENTTHREADS
AGTSVRCONNECTIONS
SERVERTHREADS
Also, do not make AGTSVRCONNECTIONS greater than AGENTTHREADS. Note that using more threads consumes more memory and is very specific to your OS, so you should probably ask Oracle Support what they recommend for your environment before changing these.
4. The following are the minimum and maximum values for these settings:
AGENTDELAY 20 60
AGENTTHREADS 2 500
AGTSVRCONNECTIONS 1 7
11/25/09
Java Customized Calc Function
Essbase already provides many calculation functions. We can also create custom calculation functions in Java. Steps:
1. Create a Java class, CalcFunc.java:
public class CalcFunc {
public static double sum (double[] data) {
int i, n = data.length;
double sum = 0.0d;
for (i=0; i< n ; i++)
{
double d = data [i];
sum = sum + d;
}
return sum;
}
}
2. Compile it with javac to generate CalcFunc.class:
javac CalcFunc.java
C:\app\bob\product\11.1.0\db_1\jdk\bin\javac.exe C:\Hyperion\CustJava\CalcFunc.java
3. Package the class file into a .jar file:
jar cf jar_filename class_filename
jar cf CalcFunc.jar CalcFunc.class
4. On the computer running Essbase Server, copy the .jar file to one of the following directories (if the directory does not exist, create it):
For .jar files containing global custom-defined functions:
ARBORPATH/java/udf/
C:\Hyperion\products\Essbase\EssbaseServer\java\udf
For .jar files to be used only with specific applications:
ARBORPATH/app/AppName/udf/
5. Register the function with MaxL:
create function Sample.'@JSUM'
as 'CalcFunc.sum'
spec '@JSUM(memberRange)'
comment 'adds list of input members';
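Once registered, the custom function can be used in a calculation script like any built-in function. A minimal sketch (the target member and member list are assumptions based on Sample.Basic):
Qtr1 = @JSUM(@LIST(Jan, Feb, Mar));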
MaxL Perl Functions
1. connect (user, password, host), Sample:
my $dbh = Essbase->connect("user","password", "host");
2. do (statement); a MaxL statement to be passed to the Essbase Server
$dbh->do("display user");
Where "display user" is a valid MaxL statement
3. pop_msg(); Navigates through MaxL status messages one at a time.
Arguments: none. Returns: a list of the form (message number, message level, message text)
4. fetch_desc(); Returns a reference to a row of query results and a reference to a corresponding row of datatypes for the query results.
($column_name, $datatypes) = $dbh->fetch_desc();
5. fetch_row(); Returns a reference to a row of query results in a MaxL output table, as a list
6. disconnect();
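Putting these functions together, a minimal end-to-end sketch (user, password, and host are placeholders; the fetch loop follows the fetch_row() description above):
use Essbase;
# Connect to the Essbase Server.
my $dbh = Essbase->connect("admin", "password", "localhost");
# Execute a MaxL statement and check its status message.
$dbh->do("display user");
my ($msg_no, $msg_level, $msg_text) = $dbh->pop_msg();
print "MaxL status: $msg_text\n";
# Describe the output table, then print it row by row.
my ($column_names, $datatypes) = $dbh->fetch_desc();
while (my $row = $dbh->fetch_row()) {
    print join("\t", @$row), "\n";
}
$dbh->disconnect();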
MaxL Perl Module
Windows: Download the Perl source from http://www.cpan.org/ and build it yourself, or use ActivePerl, available from http://www.activestate.com/. Before you install the Essbase.pm extension to Perl, ensure that:
. You have Perl 5.6 (or higher) installed on your system.
. You have Microsoft Visual C++ version 6 or higher installed on your system.
. The Essbase Server is either installed locally, or you have at least the Runtime Client installed and your system's environment is set up to access a remote Essbase Server. Your system should have an environment variable $ESSBASEPATH pointing to the root directory of the Essbase Server installation, and %ESSBASEPATH%\Bin should be included in your path variable.
Note: the MaxL Perl Module can only be used with the same version of Essbase Server.
MaxL Shell Commands
Spool on/off
Set Display Column Width
Set Message Level
Set Timestamp
Echo
Shell Escape
Nesting
Error Checking and Branching
Version
Logout
Exit
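A short MaxL shell script illustrating several of these commands (credentials are passed as positional arguments, following the same convention as the .msh scripts below; file and label names are placeholders):
login $1 $2 on $3;
spool on to 'session.log';
set timestamp on;
set column_width 40;
echo 'Checking users';
display user;
iferror 'CleanUp';
spool off;
logout;
exit;
define label 'CleanUp';
echo 'A statement returned an error';
spool off;
exit;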
11/24/09
MaxL Automation Backup/Recovery
This post describes an automated backup and recovery process for the Shared Services OpenLDAP repository and the Essbase databases.
1. In the essbase.cfg file, add:
SPLITARCHIVEFILE TRUE
TRANSACTIONLOGDATALOADARCHIVE SERVER_CLIENT
The first setting automatically splits a large archive file into smaller (<2 GB) files. The second controls which data load sources are archived for transaction replay; transaction logging itself is enabled per database with the TRANSACTIONLOGLOCATION setting.
2. backupLDAP.bat
The code inside:
@echo off
REM Backup LDAP with HOTBackup
c:\Hyperion\products\Foundation\server\scripts\backup.bat C:\Hyperion\backup\HSS_backup
3. RecoveryLDAP.bat, the code inside:
@echo off
REM Backup LDAP with HOTBackup
c:\Hyperion\products\Foundation\server\scripts\recover.bat C:\Hyperion\backup\HSS_backup
4. BackupEssbaseMaxL.msh
spool on to 'c:\Hyperion\Logs\EssbaseArchive.log';
login $1 $2 on $3;
alter database Sample.Basic force archive to file 'C:\Hyperion\backup\SampleBasic';
alter database Demo.Basic force archive to file 'c:\Hyperion\backup\DemoBasic';
spool off;
5. backupEssbaseBSO.bat
@echo off
REM Backup BSO Essbase
set uid=admin
set pwd=password
set svr=bobpc
essmsh "C:\Hyperion\auto\BackupEssbaseMaxL.msh" %uid% %pwd% %svr% %dt% %tm%
ren C:\Hyperion\backup\SampleBasic SampleBasic%date:~4,2%-%date:~7,2%-%date:~10%.arc
ren C:\Hyperion\backup\DemoBasic DemoBasic%date:~4,2%-%date:~7,2%-%date:~10%.arc
The idea is backup with general name, and use bat script rename the backup file with time stamp.
6. RecoveryEssbaseMaxL.msh
spool on to 'c:\Hyperion\Logs\EssbaseArchive.log';
login $1 $2 on $3;
alter database Sample.Basic force restore from file 'C:\Hyperion\backup\SampleBasic.arc';
alter database Demo.Basic force restore from file 'c:\Hyperion\backup\DemoBasic.arc';
alter database Sample.Basic replay transactions using sequence_id_range 2 to 2;
alter database Demo.Basic replay transactions using sequence_id_range 2 to 2;
spool off;
Select the sequence ID range based on the output of:
query database Sample.Basic list transactions;
7. RecoveryEssbaseBSO.bat
@echo off
REM Recovery BSO Essbase
set uid=admin
set pwd=password
set svr=bobpc
essmsh "C:\Hyperion\auto\RecoveryEssbaseMaxL.msh" %uid% %pwd% %svr%
The bat file for backup can be scheduled to run daily using Windows Scheduler.
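As a sketch, the backup batch file can be scheduled with the Windows schtasks command (task name and start time are examples):
schtasks /create /tn "EssbaseDailyBackup" /tr "C:\Hyperion\auto\backupEssbaseBSO.bat" /sc daily /st 02:00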
ASO - add huge dim members
ASO cubes usually have a very large number of members. The most popular build methods are the generation reference and parent-child reference methods.
1. Create the rule files.
2. Create the data files. These files are not for loading data but for member creation; check the consistency between the data files and the rule files.
3. If there are too many members, you may have to use a scripting language (such as JavaScript) to generate well-formatted data files from a more basic original data file.
4. Use the MaxL import command to load the data files into load buffers in parallel.
5. Use the MaxL import command to commit the data from the buffers to Essbase:
import database AsoSamp.Sample data connect as TBC identified by 'password' using multiple rules_file 'rule1','rule2' to load_buffer_block starting with buffer_id 100 on error write to "error.txt";
import database AsoSamp.Sample data from load_buffer with buffer_id 100, 101;
11/22/09
V11 Service Starting Sequence
The following is the correct sequence for starting EAS in version 11:
1. Start the RDBMS that holds the repositories for OpenLDAP, Shared Services, and Essbase Server. On Windows, this involves these three services:
OracleOraDb11g_home1ConfigurationManager
OracleServiceORCL
OracleOraDb11g_home1TNSListener
2. Start the LDAP service and the Hyperion Shared Services:
Hyperion Foundation OpenLDAP
Hyperion Foundation Shared Services - Web Application
3. Start Essbase Server
Hyperion Essbase Services 11.1.1 - hypservice_1
4. Start the Essbase Administration Service
Hyperion Administration Services - Web Application
http://hyperionexpert.blogspot.com/
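A minimal Windows batch sketch of this startup sequence (the service names follow the list above; exact names can vary by installation):
@echo off
REM 1. Repository database
net start OracleServiceORCL
net start OracleOraDb11g_home1TNSListener
REM 2. OpenLDAP and Shared Services
net start "Hyperion Foundation OpenLDAP"
net start "Hyperion Foundation Shared Services - Web Application"
REM 3. Essbase Server
net start "Hyperion Essbase Services 11.1.1 - hypservice_1"
REM 4. Administration Services
net start "Hyperion Administration Services - Web Application"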
MDX - Open Inventory
WITH
MEMBER [Measures].[Starting Inventory] AS
'
IIF (
IsLeaf (Year.CurrentMember),
[Measures].[Opening Inventory],
([Measures].[Opening Inventory],
OpeningPeriod (
[Year].Levels(0),
[Year].CurrentMember
))
)'
MEMBER [Measures].[Closing Inventory] AS
'
IIF (
Isleaf(Year.CurrentMember),
[Measures].[Ending Inventory],
([Measures].[Closing Inventory],
ClosingPeriod (
[Year].Levels(0),
[Year].CurrentMember
)
)
)'
SELECT
CrossJoin (
{ [100-10] },
{ [Measures].[Starting Inventory], [Measures].[Closing Inventory] }
)
ON COLUMNS,
Hierarchize ( [Year].Members , POST)
ON ROWS
FROM Sample.Basic
The query returns a grid of Starting Inventory and Closing Inventory values for each Year member.
MDX Query - BottomPercent
WITH
SET [Lowest 5% products] AS
'BottomPercent (
{ [Product].members },
5,
([Measures].[Sales], [Year].[Qtr2])
)'
MEMBER
[Product].[Sum of all lowest prods] AS
'Sum ( [Lowest 5% products] )'
MEMBER [Product].[Percent that lowest sellers hold of all product sales] AS
'Sum ( [Lowest 5% products] ) / [Product] '
SELECT
{[Year].[Qtr2].children}
on columns,
{
[Lowest 5% products],
[Product].[Sum of all lowest prods],
[Product],
[Product].[Percent that lowest sellers hold of all product sales]
}
on rows
FROM Sample.Basic
WHERE ([Measures].[Sales])
11/21/09
Increase Data Load Performance
If you use multiple import database data MaxL statements to load data values into an aggregate storage database, you can significantly improve performance by loading the values into a temporary data load buffer first, with a final write to storage after all data sources have been read.
For an ASO database, the data load method is:
1. Create a data load buffer.
2. Import data into the data load buffer; many import processes can run at the same time.
3. Commit the data from the data load buffer to the ASO database.
MaxL code:
alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;
import database AsoSamp.Sample data from server data_file 'file_1.txt' to load_buffer with buffer_id 1 on error abort;
import database AsoSamp.Sample data from server data_file 'file_2.dat' using server rules_file 'rule' to load_buffer with buffer_id 1 on error abort;
import database AsoSamp.Sample data from server excel data_file 'file_3.xls' to load_buffer with buffer_id 1 on error abort;
import database AsoSamp.Sample data from load_buffer with buffer_id 1;
By default, when cells with identical keys are loaded into the same data load buffer, Essbase resolves the cell conflict by adding the values together.
To create a data load buffer that combines duplicate cells by accepting the value of the cell that was loaded last into the load buffer, use the alter database MaxL statement with the aggregate_use_last grammar.
For example:
alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 property aggregate_use_last;
To list the load buffers that currently exist:
query database appname.dbname list load_buffers;
To create a data slice by overriding existing values (by default, values are added):
import database AsoSamp.Sample data from load_buffer with buffer_id 1 override values create slice;
To merge all incremental data slices into the main database slice:
alter database AsoSamp.Sample merge all data;
ASO Date-Time Dimension
Note: it is better to delete the dimension and use the Create Date-Time Dimension wizard to recreate it with the changes built in by the wizard, particularly if the changes involve adding or removing members. Simply deleting or adding a member of a date-time dimension by hand is risky.
Linked attribute dimensions can be associated only with the date-time dimension, and they can be built as part of creating the date-time dimension with the wizard.
Write data from BSO to ASO
BSO offers write-back to any level and can perform complex calculations. ASO is for large, aggregation-focused databases with many dimensions and many members. Sometimes we want the advantages of both ASO and BSO.
In the world where I can now make an ASO database the source of a partition, I can take advantage of the BSO strengths (write back to any level, powerful calculation engine) and then source this information to a consolidated ASO database that maybe has the volumes of detail from other sources.
Note - the new Hyperion Profitability and Cost Management solution uses this model: BSO for allocation calcs and loads to an ASO cube for reporting.
Steps:
. Create the BSO database in a separate application from the one in which the ASO database is located. Typically, the block storage database contains a subset of the dimensions in the aggregate storage database.
. Create a transparent partition based on where you want the data to be stored. Make the block storage database the target and the aggregate storage database the source, as in the sketch below.
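A hedged MaxL sketch of such a partition definition (the application, database, host, credentials, and area specifications are all placeholder assumptions; real area specs must match your outlines):
create or replace transparent partition AsoApp.AsoDb
area '@LEVMBRS(Year, 0)' sourceArea
to BsoApp.BsoDb at localhost
as admin identified by 'password'
area '@LEVMBRS(Year, 0)' targetArea;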
Optimizing ASO Outline Paging
An aggregate storage database outline is pageable, so Essbase does not need to hold the whole outline in memory, which is what lets ASO handle outlines with huge numbers of members. Depending on how you want to balance memory usage against data retrieval time, you can customize outline paging for aggregate storage outlines with one or more of the following settings in the essbase.cfg file (a sketch follows the list):
● PRELOADMEMBERNAMESPACE to turn off preloading of the member namespace.
● PRELOADALIASNAMESPACE to turn off preloading of the alias namespace.
● Compact the outline from time to time. When a member is deleted, it is not physically removed from the outline file, only marked as deleted; if you never compact the outline, the file grows ever larger.
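A sketch of the corresponding essbase.cfg entries, plus the MaxL statement for compacting an ASO outline (application and database names are examples):
PRELOADMEMBERNAMESPACE FALSE
PRELOADALIASNAMESPACE FALSE
alter database AsoSamp.Sample compact outline;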
Select Compression Dimension
The choice of compression dimension can significantly affect ASO performance.
Sample MaxL command to query the Compression Estimation Statistics:
query database 'ASOsamp'.'Sample' list aggregate_storage compression_info;
Principles:
. The candidate compression dimension should not have too many stored level 0 members.
. Average bundle fill: the average number of values stored in the groups, ranging from 1 to 16 (16 is best, 1 is worst). The candidate dimension should have the greatest value compared to the other dimensions.
. Expected level 0 size: the estimated size of the compressed database. A smaller expected level 0 size indicates that choosing this dimension is expected to enable better compression.
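Once a dimension is chosen, it can be designated as the compression dimension with MaxL. A sketch (the dimension name is an example; verify the grammar against the Tech Reference for your release):
alter database 'ASOsamp'.'Sample' set compression_dimension 'Measures';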
Essbase ASO Performance
● Use stored hierarchies (rather than dynamic hierarchies) as much as possible.
● Use alternate hierarchies (shared members) only when necessary.
● Minimize the number of hierarchies. (For example, each additional stored hierarchy slows down view selection and potentially increases the size of the aggregated data).
● If a hierarchy is a small subset of the first hierarchy, consider making the small hierarchy a dynamic hierarchy. Considerations include how often the hierarchy data is queried and the query performance impact when it is dynamically queried at the time of retrieval.
● The performance of attributes is the same as for members on a stored hierarchy.
● The higher the association level of an attribute to the base member, the faster the retrieval query
------------------------------------------------------------------
● Convert BSO to ASO where it fits: ASO has much faster aggregation performance than BSO, but it cannot include calculation scripts. You must use the migration wizard for this conversion, and once a database has been converted to ASO it cannot be converted back to BSO.
11/20/09
Bulk User creation in Essbase
How can we create bulk users in Essbase? Say 500 users need to be created at a time; what is the technique?
Suppose you have the 500 user names in c:\user.csv, one per line. You can generate a batch of MaxL commands with the JavaScript code below. Copy the code, save it as gm.js, and execute gm.js in Windows.
var fso = new ActiveXObject("Scripting.FileSystemObject");
var rs = fso.OpenTextFile("C:\\user.csv"); // input: one user name per line
var fso1 = new ActiveXObject("Scripting.FileSystemObject");
var ws = fso1.CreateTextFile("C:\\MaxL.txt"); // output: generated MaxL statements
for (var i = 1; i <= 500; i++)
{
username = rs.ReadLine();
// note the space before "identified by" and the terminating semicolon
var x = "create user " + username + " identified by " + "'password'" + " member of group " + "'testgroup'" + ";";
ws.WriteLine(x);
}
rs.Close();
ws.Close();
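The generated script can then be executed with the MaxL shell, supplying credentials on the command line (user, password, and host are placeholders):
essmsh -l admin password -s localhost C:\MaxL.txt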
Calculation Script Tuning
A case that runs slowly:
We have a big database; the cube is in a 9.3.1 environment.
It is a 10-dimensional cube with 2 dense dimensions.
We have a script that runs for more than 2 days, containing these commands:
SET CALCPARALLEL 4;
SET CALCTASKDIMS 3;
SET UPDATECALC OFF;
SET AGGMISSG ON;
SET FRMLBOTTOMUP ON;
FIX("21000", "22600", @REMOVE(@DESCENDANTS("All Products"), @DESCENDANTS("Reclaimed Rubber")), @REMOVE(@DESCENDANTS("Total Customer"),@DESCENDANTS("MICHELIN NORTH AMERICA, INC.")))
SET CREATENONMISSINGBLK ON;
IDR = NA_Partner->NA_Misc->NA_Product * ("Sales Volume"->IDR->Marketing / ("Sales Volume"->IDR->Marketing->"All Products"->"Total Customer" - "Sales Volume"->IDR->Marketing->"All Products"->"MICHELIN NORTH AMERICA, INC." - "Sales Volume"->IDR->"21000"->"Reclaimed Rubber"->"C-0005-01"));
SET CREATENONMISSINGBLK OFF;
ENDFIX
and more aggregate commands similarly ....
----------------------------
Methods to increase calculation speed:
1. When Intelligent Calculation is turned off and CREATENONMISSINGBLK is ON, all blocks within the scope of the calculation script are calculated, regardless of whether they are marked clean or dirty.
2. Cross-dimensional operators can be reduced: include more members inside the FIX instead of repeating "->" references, as in the sketch below.
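An illustrative calc script sketch of point 2 (member names here are generic assumptions, and the two forms only match when the FIXed members cover the same slice as the repeated references):
/* Repeated cross-dimensional references on every cell: */
Var1 = "Sales Volume"->Marketing / "Sales Volume"->Marketing->"All Products";
/* Narrower scope, fewer -> lookups: */
FIX(Marketing)
Var1 = "Sales Volume" / "Sales Volume"->"All Products";
ENDFIX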
Performance Management Architecture
Dimension Library—A centralized location to manage dimensions and dimension properties.
Application Library—A summary of applications that have been created and/or deployed to Financial Management, Planning, Profitability and Cost Management, Essbase Aggregate Storage Option (ASO), or Essbase Block Storage Option (BSO).
Calculation Manager— Enables you to create, validate, and deploy business rules and business rule sets.
Data Synchronization—Enables data synchronization between or within Hyperion applications.
Application Upgrade—Enables upgrades from previous Financial Management and Planning releases.
Library Job Console—Provides a summary, including status, of Dimension library and application activities, including imports, deployments, and data synchronizations.
To use Performance Management Architect for application administration, you can move applications currently managed with Financial Management or Planning Classic administration. Note that after you upgrade a Classic application to Performance Management Architect within Workspace, you cannot move it back.
11/19/09
Essbase Integration Backup
To back up Integration Services:
1 Perform a complete backup of the Oracle Essbase Integration Services catalog repository.
2 Optional: Export all models and metaoutlines into XML files.
3 Create and save a list of all source Open Database Connectivity (ODBC) Data Source Names (DSNs) that were set up.
4 Keep a current copy of installed software, along with all property files such as ais.cfg.
To recover Integration Services:
. If Integration Services installation files are lost because of hardware failure, you must reinstall Integration Services.
. If the database containing the catalog is corrupted, you must restore it and then create an ODBC DSN to the catalog and use it to retrieve models and metaoutlines.
. If the backup catalog database is also corrupted, then, from Oracle Essbase Integration Services Console, create an empty catalog and import each model and metaoutline using XML files.
Special Notices for Essbase
Note: there are two ways to back up and recover Essbase:
Method 1:
backup = full essbase backup + transaction log backup
recovery = full essbase recovery + replay transaction log
Method 2:
backup = back up all Essbase system files
recovery = replace the Essbase system files with the backup files
You can back up files while Essbase is in read-only mode.
Essbase should be stopped while recovery is in progress, in either method.
---------------------------------------------------------------------------
Note: Essbase outline changes are NOT logged!
Note: in the essbase.cfg file, set the SPLITARCHIVEFILE configuration to TRUE. This splits the archive into multiple files smaller than 2 GB.
Note: Partition commands (for example, synchronization commands) are not logged and,
therefore, cannot be replayed. When recovering a database, you must replay logged
transactions and manually make the same partition changes in the correct chronological order.
When using partitioned databases or using the @XREF function in calculation scripts, you must selectively replay logged transactions in the correct chronological order between the source and target databases.
Note: for ASO Essbase, file-system backup is the only option:
1 Stop the application.
2 Use the file system to copy the contents of the application directory (ARBORPATH/app/appname), excluding the temp directory.
-----------------------------------------------------------------------------
Set Essbase to read-only mode before backing up:
alter database appname.dbname begin archive to file ARCHIVE-FILE;
Set Essbase back to read-write mode after backing up:
alter database appname.dbname end archive;
----------------------------------------------------------------------------
Using the export command to export data to a text file is also a simple way to keep a text-format backup; exporting only level 0 data is usually enough.
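For example, a minimal MaxL sketch (the path and file name are illustrative):
export database Sample.Basic level0 data to data_file '/backup/sample_lev0.txt';
The exported text file can later be reloaded into an empty database and the upper levels recalculated.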
11/18/09
Essbase backup and Recovery
Note: Essbase outline changes are NOT logged!
Note: in the essbase.cfg file, set the SPLITARCHIVEFILE configuration to TRUE. This splits the archive into multiple files smaller than 2 GB.
MaxL Sample:
alter database Sample.Basic force archive to file '/Hyperion/samplebasic.arc';
query archive_file 'C:/Hyperion/samplebasic.arc' get overview;
alter database appname.dbname force restore from file BACKUP-FILE;
To enable transaction log backup, in the essbase.cfg file:
TRANSACTIONLOGDATALOADARCHIVE SERVER_CLIENT
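Transaction logging itself is enabled per database with the TRANSACTIONLOGLOCATION setting; a hedged essbase.cfg sketch (the trlog path is illustrative and matches the log location shown below):
TRANSACTIONLOGLOCATION Sample Basic /Hyperion/trlog NATIVE ENABLE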
query database Sample.Basic list transactions;
query database Sample.Basic list transactions after '11_20_2007:12:20:00' write to file '/Hyperion/products/Essbase/EssbaseServer/app/Sample/Basic/listoutput.csv';
alter database Sample.Basic replay transactions using sequence_id_range 2 to 2;
Note: when replaying transactions, pay attention to the log type.
You should clear the log files and the replay files from time to time; they are located at:
/Hyperion/trlog/Sample/Basic
ARBORPATH/app/appname/dbname/Replay
-------------------------------------------------------------------------------
BSO Essbase: automated Essbase backup and restore is preferred
ASO Essbase: manual backup and restore is the only choice
Full backup plus transaction log backup: after restoring from a full backup, you can replay the logged transactions that took place after the backup operation. However, outline changes are not logged and, therefore, cannot be replayed. Therefore, every time there is an outline change, you must make a backup to avoid having the outline out of sync.
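A hedged end-to-end recovery sketch in MaxL (the archive path and timestamp are illustrative; replay only transactions logged after the archive was taken):
alter database Sample.Basic force restore from file '/Hyperion/samplebasic.arc';
query database Sample.Basic list transactions after '11_20_2007:12:20:00';
alter database Sample.Basic replay transactions after '11_20_2007:12:20:00';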
In backing up a database, Essbase performs the following tasks:
1. Places the database in read-only mode, protecting the database from updates during the archive process while allowing requests to query the database.
2. Writes a copy of the database files to an archive file that resides on the Essbase Server computer. The files include:
essxxxxx.pag -- Essbase data files
essxxxxx.ind -- Essbase index files
dbname.esm -- Essbase Kernel file
dbname.tct -- Transaction control table
dbname.ind -- Free fragment file
dbname.otl -- Outline file
dbname.otl.keep -- Temporary backup of dbname.otl
essx.lro -- Linked reporting objects
dbname.otn -- Temporary outline file
dbname.db -- Database file containing database settings
dbname.ddb -- Partition definition file
dbname.ocl -- Outline change log created during incremental dimension build.
essxxxx.chg -- Outline synchronization change log
dbname.alg -- Spreadsheet update log that stores spreadsheet update transactions
dbname.atx -- Spreadsheet update log that contains historical transaction information
essbase.sec* -- Essbase security file
essbase.bak -- Backup of the Essbase security file
essbase.cfg -- Essbase Server configuration file
dbname.app -- Application file containing application settings
.otl, .csc, .rul, .rep, .eqd, .sel -- outline, calculation script, rules, report script, and query files
ESSCMD or MaxL scripts
3. Returns the database to read-write mode
-----------------------------------------------------------------------
11/17/09
Cold backup - LDAP & Shared Service
1 Stop OpenLDAP and Shared Services.
2 Back up the Shared Services directory from the file system. Shared Services files are in HYPERION_HOME/deployments and HYPERION_HOME/products/Foundation.
3 Optional:
* Windows—Back up these Windows registry entries using REGEDIT and export:
HKLM/SOFTWARE/OPENLDAP
HKLM/SOFTWARE/Hyperion Solutions
* UNIX—Back up these items:
* .hyperion.* files in the home directory of the user name used for configuring the product
* user profile (.profile or equivalent) file for the user name used for configuring the product
4 Shut down the Shared Services relational database and perform a cold backup using RDBMS tools.
To recover Shared Services from a cold backup:
1 Restore the OS.
2 Using Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition, install Shared Services binaries. Note: Do not configure the installation.
OpenLDAP Services is created during installation.
3 Restore the Shared Services cold backup directory from the file system.
4 Restore the cold backup of the Shared Services relational database using database tools.
5 Optional: Restore the Windows registry entries from the cold backup.
6 (Windows) If Shared Services Web application service must be recreated, run HYPERION_HOME/deployments/AppServer/bin/installServiceSharedServices9.bat.
7 Start the OpenLDAP service and Oracle's Hyperion Shared Services.
11/16/09
Hot backup - LDAP & Shared Service
Steps:
1 Back up any related components, including Shared Services relational database and the OpenLDAP database. Note: The Shared Services relational database and the OpenLDAP database must be backed up at the same time. Ensure that the administrator does not register a product application or create an application group at backup time.
2 Run this command to create a hot backup of OpenLDAP:
Windows:
c:/Hyperion/products/Foundation/server/scripts/backup.bat HSS_backup
UNIX:
/home/username/Hyperion/products/Foundation/server/scripts/backup.sh /home/username/backups/HSS_backup
To recover Shared Services from a hot backup:
1 Stop OpenLDAP and Shared Services.
2 Recover the Shared Services relational database with RDBMS tools, using the backup with the same date as the OpenLDAP backup.
3 If you use OpenLDAP as Native Directory, recover the OpenLDAP database by running the recover script. Examples:
Windows noncatastrophic recovery—C:/Hyperion/products/Foundation/server/scripts/recover.bat c:/HSS_backup
UNIX catastrophic recovery—/home/username/Hyperion/products/Foundation/server/scripts/recover.sh /home/username/HSS_backup catRecovery
----------------------------------
Note: backups can be physical or logical. A physical backup can be hot or cold:
* Hot backup—Users can make changes to the database during a hot backup. Log files of changes made during the backup are saved, and the logged changes are applied to synchronize the database and the backup copy. A hot backup is used when a full backup is needed and the service level does not allow system downtime for a cold backup.
* Cold backup—Users cannot make changes to the database during a cold backup, so the database and the backup copy are always synchronized. Cold backup is used only when the service level allows for the required system downtime.
Note: A cold full physical backup is recommended.
* Full—Creates a copy of data that can include parts of a database such as the control file, transaction files (redo logs), archive files, and data files. This backup type protects data from application error and safeguards against unexpected loss by providing a way to restore original data. Perform this backup weekly or biweekly, depending on how often your data changes. Making full backups cold, so that users cannot make changes during the backups, is recommended.
Note: The database must be in archive log mode for a full physical backup.
* Incremental—Captures only changes made after the last full physical backup. The files differ for databases, but the principle is that only transaction log files created since the last backup are archived. Incremental backup can be done hot, while the database is in use, but it slows database performance.
In addition to backups, consider the use of clustering or log shipping to secure database content.
Logical Backup
A logical backup copies data, but not physical files, from one location to another. A logical backup is used for moving or archiving a database, tables, or schemas and for verifying the structures in a database.
11/15/09
Recovery from Spreadsheet Log
The Essbase Spreadsheet Add-in can update Essbase at the data-cell level. In an Essbase disaster, the database can be restored from the last backup. Suppose the last backup ran last night and the disaster happens at lunchtime today: this morning's data is not restored by default. How do you restore all of the detail data, including updates made one second before the disaster? Here is the method:
1. Set SSAUDIT or SSAUDITR in the essbase.cfg file
Sample code:
SSAUDITR Sample Basic C:\logfoldername
The above statement enables spreadsheet logging for the Basic database of the Sample application. If logfoldername is not specified, the default log folder is used.
SSAUDIT Sample
The above statement enables spreadsheet logging for all databases in the Sample application.
SSAUDIT xxxxx xxxxx c:\sslog
The above statement enables spreadsheet logging for all databases in all applications.
2. After adding the above settings to essbase.cfg, restart Essbase Server. You should see lines like the following in C:\Hyperion\logs\essbase\app\Sample\Sample.log:
[Sun Nov 15 22:22:50 2009]Local/Sample///Info(1002088)
Starting Spreadsheet Log [C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.alg] For Database [Basic]
This means the setting took effect.
3. You can now keep making updates to Sample.Basic through the Excel Spreadsheet Add-in; every update is logged.
4. Now suppose your Essbase database has just been restored from a backup; furthermore, you can recover transactions from the update log. To do so, use the Essbase command-line facility, ESSCMD, from the server console. The following ESSCMD command sequence loads the update log:
LOGIN hostnode username password
SELECT appname dbname //Example: Select Sample Basic
LOADDATA 3 filepath:appname.ATX
//LOADDATA 3 C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.atx
EXIT
5. The difference between SSAUDIT and SSAUDITR:
SSAUDIT appends log data to the existing logs after archiving; SSAUDITR clears the logs at the end of the archiving process.
6. Please note: you should manually back up and clear the Basic.atx and Basic.alg log files so that they do not grow too large.
Select Essbase Compression method
1. If your database does not hold much data, you do not need to change any compression settings. By default, Essbase uses bitmap compression, which is the best choice in most cases.
2. If your database is about 90% dense, consider the zlib compression method.
3. If your database is sparse and contains long runs of repeated non-missing data cells, use the RLE compression method.
By the way, the "Index Value Pair" compression is selected automatically by the Essbase system. Index Value Pair addresses compression on databases with larger block sizes, where the blocks are highly sparse. This compression algorithm is not selectable but is automatically used whenever appropriate by the database. The user must still choose between the compression types None, bitmap, RLE, and zlib through Administration Services.
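A hedged MaxL sketch for switching the method (the set compression grammar is an assumption based on the alter database statement; verify it against the MaxL reference for your release):
alter database Sample.Basic set compression rle;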
Simulated Calculation
You can simulate a calculation using SET MSG ONLY in a calculation script. A simulated calculation produces results that help you analyze the performance of a real calculation that is based on the same data and outline. By running a simulated calculation with a command such as SET NOTICE HIGH, you can mark the relative amount of time each sparse dimension takes to complete. Then, by performing a real calculation on one or more dimensions, you can estimate how long the full calculation will take, because the time a simulated calculation takes to run is proportional to the time that the actual calculation takes to run.
For example, if the calculation starts at 9:50:00 AM, and the first notice is time-stamped at 09:50:10 AM and the second is time-stamped at 09:50:20 AM, you know that each part of the calculation took 10 seconds. If you then run a real calculation on only the first portion and note that it took 30 seconds to run, you know that the other portion also will take 30 seconds. If there were two messages total, then you would know that the real calculation will take approximately 60 seconds (20 / 10 * 30 = 60 seconds). Use the following topics to learn how to perform a simulated calculation and how to use a simulated calculation to estimate calculation time.
Performing a Simulated Calculation
Before you can estimate calculation time, you must perform a simulated calculation on a data model that is based on your actual database.
➤ To perform a simulated calculation:
1 Create a data model that uses all dimensions and all levels of detail about which you want information.
2 Load all data. This procedure calculates only data loaded in the database.
3 Create a calculation script with these entries:
SET MSG ONLY;
SET NOTICE HIGH;
CALC ALL;
If you are using dynamic calculations on dense dimensions, substitute the CALC ALL command with the specific dimensions that you need to calculate; for example, CALC DIM EAST.
Note:
If you try to validate the script, Essbase reports an error. Disregard the error.
4 Run the script.
5 Find the first sparse calculation message in the application log and note the time in the message.
6 Note the time for each subsequent message.
7 Calculate the dense dimensions of the model that are not being dynamically calculated:
CALC DIM (DENSE_DIM1, DENSE_DIM2, …);
8 Calculate the sparse dimensions of the model:
CALC DIM (SPARSEDIM1, SPARSEDIM2, …);
9 Project the intervals at which notices will occur, and then verify against sparse calculation results. You can then estimate calculation time.
Incremental Data Loading
Many companies load data incrementally. For example, a company may load data each month for that month. To optimize calculation performance when you load data incrementally, make the dimension tagged as time a sparse dimension. If the time dimension is sparse, the database contains a data block for each time period. When you load data by time period, Essbase accesses fewer data blocks because fewer blocks contain the relevant time period. Thus, if you have Intelligent Calculation enabled, only the data blocks marked as dirty are recalculated. For example, if you load data for March, only the data blocks for March and the dependent parents of March are updated. However, making the time dimension sparse when it is naturally dense may significantly increase the size of the index, creating possibly slower performance due to more physical I/O activity to accommodate the large index.
If the dimension tagged as time is dense, you still receive some benefit from Intelligent Calculation when you do a partial data load for a sparse dimension. For example, if Product is sparse and you load data for one product, Essbase recalculates only the blocks affected by the partial load, although time is dense and Intelligent Calculation is enabled.
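For example, a hedged MaxL sketch of a monthly incremental load (the data file and rules file names are illustrative):
import database Sample.Basic data from data_file '/data/march_sales.txt' using server rules_file 'ldmarch' on error write to '/data/march_errors.log';
With the time dimension sparse and Intelligent Calculation enabled, the subsequent calculation touches mainly the March blocks and their dependent parents.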
The principles for designing an Essbase outline
Point 1:
Don't use too many hierarchy levels. It is fine to have many members in one level, but it is unwise to have deeply nested levels where each level has only a few members. This is a very important principle when we design a cube.
The outline sequence: time, accounts, the other dense dimensions, the sparse dimension with the fewest members, the other sparse dimensions, then the attribute dimensions.
Point 2:
Calculation performance may be affected if a database outline has multiple flat dimensions. A flat dimension has very few parents, and each parent has many thousands of children; in other words, flat dimensions have many members and few levels. You can improve performance for outlines with multiple flat dimensions by adding intermediate levels to the database outline.
The above two points come from different sources, and they look somewhat different, even opposite. My understanding: we should have fewer levels in any case, but the huge member counts should sit at the parent level. That means we have thousands of parents, but each parent has only a few children.
11/14/09
Validate Essbase Structure
Using VALIDATE to Check Integrity
The VALIDATE command performs many structural and data integrity checks:
•Verifies the structural integrity of free space information in the index.
•Compares the data block key in the index page with the data block key in the corresponding data block.
•The Essbase index contains an entry for every data block. For every read operation, VALIDATE automatically compares the index key in the index page with the index key in the corresponding data block and checks other header information in the block. If it encounters a mismatch, VALIDATE displays an error message and continues processing until it checks the entire database.
•Restructures data blocks whose restructure was deferred with incremental restructuring.
•Checks every block in the database to make sure each value is a valid floating point number.
•Verifies the structural integrity of the LROs catalog.
Note:
When you issue the VALIDATE command, we recommend placing the database in read-only mode.
As Essbase encounters mismatches, it records error messages in the VALIDATE error log. You can specify a file name for error logging; Essbase prompts you for this information if you do not provide it. The VALIDATE utility runs until it has checked the entire database.
You can use the VALIDATE command in ESSCMD to perform these structural integrity checks.
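For example, a minimal ESSCMD session (the host, credentials, and error log file name are illustrative):
LOGIN localhost admin password
SELECT Sample Basic
VALIDATE "validate_errors.log"
EXIT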
During index free space validation, the VALIDATE command verifies the structural integrity of free space information in the index. If integrity errors exist, Essbase records them in the VALIDATE log. The file that you specified on the VALIDATE command holds the error log.
If VALIDATE detects integrity errors regarding the index free space information, the database must be rebuilt. You can rebuild in three ways:
•Restore the database from a recent system backup
•Restore the data by exporting data from the database, creating an empty database, and loading the exported data into the new database.
•Restructure the database
Essbase Restructure
There are three kinds of restructure: dense restructure, sparse restructure, and outline-only restructure. If a member of a dense dimension is changed, the restructure is a dense restructure. A dense restructure takes a long time, because data blocks must be re-created. A sparse restructure happens only when a sparse member is changed; it restructures only the index, so it should not take much time. An outline-only restructure changes neither the data blocks nor the index, so it takes essentially no time.
•Dense restructure: If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in the data files and creates new data files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense restructure you must recalculate the database. Dense restructuring, the most time-consuming of the restructures, can take a long time to complete for large databases.
•Sparse restructure: If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.
•Outline-only restructure: If a change affects only the database outline, Essbase does not restructure the index or data files. Member name changes, creation of aliases, and dynamic calculation formula changes are examples of changes that affect only the database outline.
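To trigger a restructure explicitly (for example, after many deferred incremental restructures), a hedged MaxL sketch:
alter database Sample.Basic force restructure;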
Essbase Committed Setting
Under uncommitted access, Essbase locks blocks for write access only until it finishes updating the block. Under committed access, Essbase holds locks until a transaction completes. With uncommitted access, blocks are released more frequently than with committed access, so Essbase performance is better with uncommitted access. Besides, parallel calculation works only with uncommitted access.
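A hedged MaxL sketch for switching a database to uncommitted access (uncommitted is also the default; the committed_mode keyword is assumed from the alter database grammar):
alter database Sample.Basic disable committed_mode;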
---------------
Database performance:
Uncommitted access always yields better database performance than committed access. When using uncommitted access, Essbase does not create locks that are held for the duration of a transaction but commits data based on short-term write locks.
Data consistency: Committed access provides a higher level of data consistency than uncommitted access. Retrievals from a database are more consistent. Also, only one transaction at a time can update data blocks when the isolation level is set to committed access. This factor is important in databases where multiple transactions attempt to update the database simultaneously.
Data concurrency:
Uncommitted access provides better data concurrency than committed access. Blocks are released more frequently than during committed access. With committed access, deadlocks can occur.
Database rollbacks:
If a server crash or other server interruption occurs during active transactions, the Essbase kernel rolls back the transactions when the server is restarted. With committed access, rollbacks return the database to its state before transactions began. With uncommitted access, rollbacks may result in some data being committed and some data not being committed.
11/12/09
Data Block - Calculation Performance
1. Use FIX instead of the cross-dimensional operator
Compare these two statements:
Fix(Jan)
Sales = Sales * 1.05;
EndFIX
Sales(Sales -> Jan = Sales -> Jan * 1.05);
The second is inefficient: it scans the entire time dimension even though only Jan is calculated. The first calculates Jan only within the Sales blocks, which is more efficient.
2. The data block size setting
It should be 10 KB - 100 KB. If the data block size is too big (>100 KB), Intelligent Calculation does not work well; if it is too small (down near 10 KB), the index may become too large, which also slows calculation.
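As a quick check, the stored block size is roughly the product of the stored member counts of the dense dimensions times 8 bytes per cell. A worked example with hypothetical dense dimensions: 12 months x 40 stored accounts x 2 scenarios = 960 cells, and 960 x 8 bytes = 7,680 bytes, which falls below the 10 KB - 100 KB range, so another small dimension could be made dense.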
Time Dim - Calculation Performance
By default, the time dimension is set dense. But if you load data incrementally (for example, via a MaxL script at the end of every month), you can set the time dimension sparse; with Intelligent Calculation enabled, only the data blocks marked as dirty are recalculated. This can significantly improve data loading performance.
--------
Incremental Data Loading
Many companies load data incrementally. For example, a company may load data each month for that month. To optimize calculation performance when you load data incrementally, make the dimension tagged as time a sparse dimension. If the time dimension is sparse, the database contains a data block for each time period. When you load data by time period, Essbase accesses fewer data blocks because fewer blocks contain the relevant time period. Thus, if you have Intelligent Calculation enabled, only the data blocks marked as dirty are recalculated. For example, if you
load data for March, only the data blocks for March and the dependent parents of March are updated.
However, making the time dimension sparse when it is naturally dense may significantly increase the size of the index, creating possibly slower performance due to more physical I/O activity to accommodate the large index.
If the dimension tagged as time is dense, you still receive some benefit from Intelligent Calculation when you do a partial data load for a sparse dimension. For example, if Product is sparse and you load data for one product, Essbase recalculates only the blocks affected by the partial load, although time is dense and Intelligent Calculation is enabled.
Note: This method applies only to block storage (BSO) databases; ASO databases have no dense/sparse settings or data blocks.
11/11/09
Intelligent Calculation Performance Tuning
1. Keep Intelligent Calculation switched on. If we switch off Intelligent Calculation, the calculation results are still correct, because all data blocks, whether marked dirty or clean, are recalculated. But the price is that far too many data blocks are included in the calculation.
2. We can switch Intelligent Calculation on and off inside a calculation script in different cases so that only the necessary data blocks are recalculated. Use the commands: SET CLEARUPDATESTATUS ONLY/AFTER/OFF and SET UPDATECALC ON/OFF.
Sample :
SET UPDATECALC OFF;
SET CLEARUPDATESTATUS AFTER;
CALC TWOPASS;
Parallel Calculation and Tuning
1. We can enable parallel calculation at the system level in the essbase.cfg file, or at the calculation script level (see the cfg sketch after this list). Sample calculation script commands:
SET CALCPARALLEL 3;
SET CALCTASKDIMS 2;
2. Parallel calculation works only with uncommitted access.
3. There is a risk that parallel calculation may freeze the computer.
4. Use the FIX command so that only the specific data blocks are calculated; don't use the cross-dimensional operator in most cases.
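A hedged essbase.cfg sketch that scopes the same settings to one database at the system level (the application and database names are illustrative):
CALCPARALLEL Sample Basic 3
CALCTASKDIMS Sample Basic 2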
Essbase Calculation Performance Tuning
1. After parallel calculation is enabled, by default Essbase uses the last sparse dimension in an outline to identify tasks that can be performed concurrently. But the distribution of data may cause one or more tasks to be empty; that is, there are no blocks to be calculated in the part of the database identified by a task. This situation can lead to uneven load balancing, reducing parallel calculation effectiveness.
2. To resolve this situation, you can enable Essbase to use additional sparse dimensions in the identification of tasks for parallel calculation. For example, if you have a FIX statement on a member of the last sparse dimension, you can include the next-to-last sparse dimension from the outline as well. Because each unique member combination of these two dimensions is identified as a potential task, more and smaller tasks are created, increasing the opportunities for parallel processing and improving load balancing.
3. Add or modify CALCTASKDIMS in the essbase.cfg file on the server, or use the calculation script command SET CALCTASKDIMS at the top of the script.
Sample code: SET CALCTASKDIMS 2;
This enables the last two sparse dimensions to be included in task identification, which may significantly improve calculation performance.