HANA NSE (Native Storage Extension)

1. Introduction

SAP HANA Native Storage Extension (NSE) is a built-in capability of SAP HANA that can be activated and used to offload data to the filesystem, so that it no longer has to be held fully in memory.

The database size can be increased without the need to scale the database up or out.

All data in the database has a Load Unit property, which defines how data is accessed, loaded and processed by the HANA database.

The data in NSE is “page-loadable”, which means that it is loaded on request by the HANA resource manager into the NSE buffer cache.

The buffer cache manages its own memory pools, allocating memory on demand, releasing unused memory, and applying smart caching algorithms to avoid redundant I/O operations. Proper sizing is important: a cache that is too small can lead to Out-Of-Buffer (OOB) events and performance issues, while one that is too large wastes memory.

The NSE Advisor is a tool that provides recommendations about load units for tables, partitions, or columns. It helps to decide which data should be placed in NSE as page loadable. The relevant view is “M_CS_NSE_ADVISOR”, which returns a list of tables, partitions, and columns with suggestions on whether they should be page or column loadable.

1.1 Reasons

The main reasons for implementing HANA NSE are:

  • Reduce HANA memory footprint
  • Reduce HANA License Costs
  • Reduce Hardware Costs (no need to Scale-Up or Scale-Out)

1.2 Restrictions

Currently, it is not possible to change the partitioning definition and the load unit of a table in a single statement.
First, the table needs to be partitioned using a heterogeneous partitioning definition; afterwards, the load unit can be changed for the whole table or for individual partitions, as shown in the sketch below.
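
A minimal sketch of this two-step approach, assuming a hypothetical table "SAPABAP1"."ZSALES" that is range-partitioned on GJAHR (all object names, ranges, and the partition ID are assumptions):

-- Step 1: repartition using a heterogeneous (RANGE) partitioning definition
ALTER TABLE "SAPABAP1"."ZSALES" PARTITION BY RANGE ("GJAHR")
  ((PARTITION '2000' <= VALUES < '2023', PARTITION OTHERS));

-- Step 2: afterwards, change the load unit per partition (or for the whole table)
ALTER TABLE "SAPABAP1"."ZSALES" ALTER PARTITION 1 PAGE LOADABLE;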

2. Load Units

There are two types of load units:

  • Column-loadable: columns are fully loaded into memory when they are used.
  • Page-loadable: columns are not fully loaded into memory, but only the pages containing parts of the column that are relevant for the query.

The load unit can be defined with CREATE or ALTER statements for tables, partitions or columns.

When not specified, the load unit has the value “DEFAULT”. At table level this resolves to “column loadable”; at lower levels, “DEFAULT” inherits the effective load unit from the parent object, in the order Table -> Partition -> Column.

Example:

Type      | Specified Load Unit | Effective Load Unit
Table     | DEFAULT             | COLUMN
Partition | Page Loadable       | Page Loadable
Column    | DEFAULT             | Page Loadable
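
For illustration, a minimal sketch of setting load units at creation time (schema, table, and column names are assumptions):

CREATE COLUMN TABLE "MYSCHEMA"."ZDOCS" (
  "ID"      INTEGER PRIMARY KEY,
  "PAYLOAD" NVARCHAR(5000) COLUMN LOADABLE  -- explicit column-level load unit
) PAGE LOADABLE;  -- table-level load unit

-- "ID" has load unit DEFAULT and therefore inherits PAGE from the table;
-- "PAYLOAD" remains column loadable because it is specified explicitly.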

3. Identify WARM data

3.1 Static Approach

Based on data characteristics and some knowledge about your applications, you can apply a static approach to identify warm data:

  • Less-frequently accessed data in your application(s) is e.g. application log data or statistics data. Such records are often written once and rarely read afterwards.
  • Historical data in the application is also less-frequently accessed. Some application tables have e.g. a date/time column that provides an indicator to “split” a table into current and older (historical) parts. This split can be implemented with table range-partitioning in the HANA database, where the partition key is e.g. the date/time column in your table.

Without deeper knowledge about the applications running on the database, you can also monitor the table access statistics in HANA (M_TABLE_STATISTICS, or the SAP HANA Cockpit) to identify rarely accessed tables, as in the sketch below.
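
A minimal sketch of such a check, joining the access statistics with the table sizes (the read-count threshold of 100 is an assumption; adjust it to your workload):

select t.SCHEMA_NAME, t.TABLE_NAME, s.READ_COUNT, s.WRITE_COUNT,
to_decimal(sum(t.MEMORY_SIZE_IN_TOTAL)/1024/1024/1024,2,2) as "MEM_GB"
from M_CS_TABLES t
inner join M_TABLE_STATISTICS s
on s.SCHEMA_NAME = t.SCHEMA_NAME and s.TABLE_NAME = t.TABLE_NAME
where s.READ_COUNT < 100
group by t.SCHEMA_NAME, t.TABLE_NAME, s.READ_COUNT, s.WRITE_COUNT
order by "MEM_GB" desc;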

3.2 Dynamic Approach

This can be achieved with the NSE Advisor.

Since S/4HANA 2021, SAP delivers over 150 objects with NSE settings during the EU_IMPORT phase. The most common ones are dictionary tables like DD03L, DOKTL, DYNPSOURCE, TADIR etc. This means every customer on S/4HANA 2021 or higher will use NSE, so please take care of your configuration.

4. Implementation

The NSE implementation contains 3 phases:

  • Pre-checks
  • Plan
  • Execute

4.1 Pre-checks

Before starting, it is wise to perform some checks to establish the current situation.

The following SQL commands can be executed in the HANA tenant on which the NSE will be activated:

  • Used memory
select to_decimal(sum(USED_MEMORY_SIZE)/1024/1024/1024,2,2) "Memory Gb" from M_SERVICE_COMPONENT_MEMORY;
  • Memory size for tables
select to_decimal(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024/1024,2,2) as "MEM_SIZE_GB" from M_CS_TABLES;
  • Memory size per component
select COMPONENT, to_decimal(sum(USED_MEMORY_SIZE)/1024/1024/1024,2,2) "Memory Gb" from M_SERVICE_COMPONENT_MEMORY group by COMPONENT order by COMPONENT;
  • Column store size
select to_decimal(sum(PAGE_SIZE*USED_BLOCK_COUNT)/1024/1024/1024,2,2) as "CS Size GB" from M_DATA_VOLUME_PAGE_STATISTICS where not PAGE_SIZECLASS like '%-RowStore';

4.2 Plan

This phase consists of determining the candidate objects that can be offloaded. There are a few possible approaches, which can and should be combined to arrive at a proper offload plan.

These are the possible options:

  • NSE Advisor
  • Technical and Dictionary Tables
  • Tables with a low read count
  • Partitions with low read count
  • Statistics Server Tables
  • Audit Log Tables
  • Partitioning

4.2.1 NSE Advisor

The NSE Advisor should be used either with the HANA Cockpit or with the built-in procedure, but not with a mix of both, as that will lead to wrong results.

Use the advisor for representative workloads of your system.

To limit the CPU consumption of the NSE Advisor, set the following parameters:

  • min_row_count     Exclude objects with fewer than X rows (default: 10000)
  • min_object_size    Ignore objects smaller than X bytes (default: 1048576)

Note: NSE Advisor recommendations are NOT persistent across a restart!
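
These parameters can also be set via SQL; a minimal sketch, assuming they live in the cs_nse_advisor section of indexserver.ini (see SAP Note 2799997):

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
SET ('cs_nse_advisor','min_row_count') = '10000',
('cs_nse_advisor','min_object_size') = '1048576' WITH RECONFIGURE;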
4.2.1.1 Enable NSE Advisor

Enable via the HANA Cockpit:

Go to the HANA database and select view “All”. In the tile “Native Storage Extension”, click on the three dots and choose “Configure NSE Advisor”:


Set the “Enable NSE Advisor” to “ON” and change the “Minimum Object Size” to “1024” MB:

4.2.1.2 Retrieve NSE Advisor recommendations

Generally, the NSE Advisor recommends that you change the load unit of a table, partition, or a column to:

  • Page-loadable to reduce memory footprint without much performance impact.
  • Column-loadable to improve performance by keeping hot objects in memory.

The following views can be queried to retrieve the recommendations from the NSE Advisor:

M_CS_NSE_ADVISOR

An overview of the objects to be changed:

select * from M_CS_NSE_ADVISOR order by confidence desc;

Same as previous, but in more readable format and only selecting required columns:

select SCHEMA_NAME as SCHEMA, TABLE_NAME, PART_ID, COLUMN_NAME, GRANULARITY as CHANGE_ON, CONFIDENCE,
to_decimal(MEMORY_SIZE_IN_MAIN/1024/1024/1024,2,2) as "SIZE_MEM_GB",
to_decimal(MAIN_PHYSICAL_SIZE/1024/1024/1024,2,2) as "SIZE_PHYS_GB"
from M_CS_NSE_ADVISOR
where not CONFIDENCE = '-1'
order by CONFIDENCE desc;

M_CS_NSE_ADVISOR_DETAILS

An overview with more details of all the objects checked:

select * from M_CS_NSE_ADVISOR_DETAILS order by confidence desc;

Same as previous, but in more readable format and only selecting required columns:


select SCHEMA_NAME as SCHEMA, TABLE_NAME, PART_ID, COLUMN_NAME, GRANULARITY as CHANGE_ON,
CONFIDENCE, CURRENT_LOAD_UNIT as CHANGE_FROM, TARGET_LOAD_UNIT as CHANGE_TO,
to_decimal(MEMORY_SIZE_IN_MAIN/1024/1024/1024,2,2) as "SIZE_MEM_GB",
to_decimal(MAIN_PHYSICAL_SIZE/1024/1024/1024,2,2) as "SIZE_PHYS_GB"
from M_CS_NSE_ADVISOR_DETAILS
where not CONFIDENCE = '-1'
order by CONFIDENCE desc;

Explanation of the columns:

Column name         | Renamed Column | Description
SCHEMA_NAME         | SCHEMA         | Displays the schema name.
TABLE_NAME          |                | Displays the table name.
COLUMN_NAME         |                | Displays the column name. When the recommendation level is not COLUMN, this field is not applicable and the value NULL is used.
PART_ID             |                | Displays the table partition ID. When the recommendation level is not PARTITION, this field is not applicable and the value is 0.
CURRENT_LOAD_UNIT   | CHANGE_FROM    | Displays the current load unit for this object: COLUMN/PAGE.
TARGET_LOAD_UNIT    | CHANGE_TO      | Displays the recommended load unit for this object: COLUMN/PAGE.
GRANULARITY         | CHANGE_ON      | Displays the object level at which the recommendation for this table is given: TABLE, PARTITION, or COLUMN.
MEMORY_SIZE_IN_MAIN | SIZE_MEM_GB    | Displays the memory consumption, in bytes, per granularity of the object in main memory. If the object is not loaded, the value 0 is used.
MAIN_PHYSICAL_SIZE  | SIZE_PHYS_GB   | Displays the storage size, in bytes, per granularity of the object.
CONFIDENCE          |                | How sure the NSE Advisor is about performing the change.
Note: Columns like $trexexternalkey$ or $AWKEY_REFDOC$GJAHR$ are internal columns (SAP Note 2799997, Q15 “How to handle NSE advisor recommendations to move columns like $trexexternalkey$ or $AWKEY_REFDOC$GJAHR$?”) that refer to the primary key or to a multi-column index (concat attribute), respectively. Indexes can be moved to NSE as well.

Identify index name:

select SCHEMA_NAME, TABLE_NAME, INDEX_NAME, INDEX_TYPE, CONSTRAINT
from INDEXES where TABLE_NAME = '<table_name>';

Change the index load unit:

ALTER INDEX "<schema>"."<index_name>" PAGE LOADABLE;

4.2.1.3 Disable the NSE Advisor

In the same screen as the enable function, now set the “Enable NSE Advisor” to “OFF”.

4.2.2 Technical and Dictionary Tables

Some technical and dictionary tables can be offloaded, as they are not read often and only inserts are being performed.

Examples: Change documents (CDPOS, CDHDR), Application Log (BALDAT), IDocs (EDID4), Table changes (DBTABLOG), Archiving (ZARIX*).

It comes down to the following tables:

  • CDPOS *
  • CDHDR *
  • BALDAT
  • EDID4
  • DBTABLOG
  • ZARIX%
  • %~OLD          (SAP Note 2198923)

Note: Be careful with CDPOS and CDHDR; after changing these tables, the NSE Advisor may recommend that:

  • CDHDR should be changed back to COLUMN loadable.
  • CDPOS should be changed back to COLUMN loadable for the columns OBJECTID and CHANGENR.

4.2.2.1 Page Preferred versus Page Enforced

Starting with SAP S/4HANA 2021, there are tables with a ‘page preferred’ load unit in the ABAP Data Dictionary (DDIC).

Run report DD_REPAIR_TABT_DBPROPERTIES for all tables (with the ‘check only’ option) to determine tables that have the ABAP DDIC load unit ‘page preferred’ but the actual load unit ‘column’ in the HANA database.

For a quick glance at the load unit settings in DDIC, you can check field LOAD_UNIT in table DD09L:

‘P’ = ‘page preferred’, ‘Q’ = ‘page enforced’
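
A minimal sketch of such a glance via SQL (the SAP schema name is an assumption; replace it with yours):

select TABNAME, LOAD_UNIT from "SAPABAP1"."DD09L" where LOAD_UNIT in ('P','Q');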

The following query can be used to determine the tables whose DDIC load unit is ‘page preferred’ but whose actual load unit is ‘COLUMN’:

select b.SCHEMA_NAME, a.TABNAME, a.LOAD_UNIT, b.LOAD_UNIT,
to_decimal(sum(b.MEMORY_SIZE_IN_TOTAL)/1024/1024/1024,2,2) "Memory GB"
from <SAP Schema>.DD09L a, M_CS_TABLES b
where a.TABNAME = b.TABLE_NAME
and b.LOAD_UNIT = 'COLUMN'
and a.LOAD_UNIT = 'P'
group by b.SCHEMA_NAME, a.TABNAME, a.LOAD_UNIT, b.LOAD_UNIT
order by sum(b.MEMORY_SIZE_IN_TOTAL/1024/1024/1024) desc;

For more information, see SAP Note 2973243.

Column Preferred or Page Preferred

  • Column Preferred is the default behavior.
  • The load unit is only applied to the database table (runtime object in SAP HANA) upon creation of the table. As a consequence, if the DDIC setting of an existing table that is initially “column loadable” is changed to “page preferred” in a new release of SAP S/4HANA, an upgrade to that release will not change the load unit setting of the database table – the table will remain “column loadable” in SAP HANA.
  • Changing from one preferred load unit to another does not change the load unit on the database.
  • The ABAP DDIC consistency check does not consider the load unit. It is accepted if the load unit on the database differs from the values specified in the DDIC.
  • During a table conversion, the actual NSE settings of the runtime object in the HANA database will be preserved. 

Column Enforced or Page Enforced

  • The load unit is specified upon creation of the table. Furthermore, changes to the load unit in the ABAP DDIC result in corresponding changes on the database.
  • Changing the enforced load unit results in a corresponding change on the database.
  • The ABAP DDIC consistency check takes the load unit into account. Different values for the load unit in the DDIC and on the database result in tables that are inconsistent from the DDIC point of view.
  • For the “Page Enforced” load unit setting, the technical restrictions apply as described in the context of S/4HANA 2020. 
  • For the “Enforced” flavor of load unit settings, the behavior during table conversion is as described in the context of S/4HANA 2020.

4.2.3 Statistics Server Tables

The ESS (embedded statistics server) tables are not part of your business workload. They hold the performance and workload data of the system, which means they are only read when you have issues and your DBA or SAP is analyzing the system.
These column tables of the SAP HANA statistics server do not use NSE by default. It is recommended to activate NSE for the column tables of the SAP HANA statistics server as well.

Determine the current load unit:

select distinct LOAD_UNIT from SYS.TABLES where SCHEMA_NAME = '_SYS_STATISTICS' and IS_COLUMN_TABLE = 'TRUE' and IS_USER_DEFINED_TYPE = 'FALSE';

When the output value is ‘DEFAULT’, activate NSE for column tables of HANA statistics server using PAGE LOADABLE by running the following command with a user with sufficient permissions:

call _SYS_STATISTICS.SHARED_ALTER_PAGE_LOADABLE;

4.2.4 Audit Log Tables

Starting with HANA 2.0 Rev. 70, it is possible to page out the audit log table “CS_AUDIT_LOG_”.

Check the current setting with the following command:

select LOAD_UNIT from SYS.TABLES where SCHEMA_NAME = '_SYS_AUDIT' and TABLE_NAME = 'CS_AUDIT_LOG_';

When the output value is ‘DEFAULT’, activate NSE for the audit log tables with the following command:

ALTER SYSTEM ALTER AUDIT LOG PAGE LOADABLE;

4.2.5 Partitioning

When NSE is used for a table (regardless of whether the full table is page loadable, or only selected columns or partitions), comparatively small partitions should be designed in order to optimize I/O-intensive operations such as delta merges. Because of the focus on I/O, the relevant measure is not the number of records per partition, but the partition size in bytes. When using NSE with a table, partition sizes should be smaller than 10 GB. This also means that if the size of a table that is fully or partially (selected columns) in NSE reaches 10 GB, the table should be partitioned.

Which data to page out should be analyzed and tested carefully, and depends highly on how your business uses the table. In most scenarios only the current and the previous year are of interest, which means anything older than two years can be paged out. You then have to choose a partition attribute; often there is a time-range attribute like FYEAR or GJAHR, and you can partition by year, quarter, or month, depending on the amount of data.

Monitor the buffer cache hit ratio, the fill degree of partitions, and the performance of SQL statements (possibly new coding that was not analyzed before), and create new partitions for new months, quarters, or years. This is an ongoing process, and every new HANA revision has its own new features.

The following query retrieves partitioned tables with range partitions whose upper bound lies more than four years in the past (adjust the add_years offset as needed):

select a.schema_name, a.table_name, c.part_id, replace(b.LEVEL_1_EXPRESSION,'"','') "Column",
max(c.LEVEL_1_RANGE_MIN_VALUE) "Last_Range_From",
max(c.LEVEL_1_RANGE_MAX_VALUE) "Last_Range_To",
to_decimal(d.memory_size_in_total/1024/1024/1024,2,2) as "MEM_SIZE_GB", to_decimal(d.disk_size/1024/1024/1024,2,2) as "DISK_SIZE_GB"
from m_tables a, partitioned_tables b, table_partitions c, m_table_partitions d
where a.IS_COLUMN_TABLE = 'TRUE'
and a.is_partitioned = 'TRUE'
and b.level_1_type = 'RANGE'
and a.table_name = b.table_name
and b.table_name = c.table_name
and c.table_name = d.table_name
and c.part_id = d.part_id
and d.load_unit <> 'PAGE'
and b.LEVEL_1_COUNT > 1
and c.LEVEL_1_RANGE_MAX_VALUE < add_years(current_timestamp,-4)
and not c.LEVEL_1_RANGE_MAX_VALUE = ''
group by a.schema_name, a.table_name, b.LEVEL_1_EXPRESSION, b.LEVEL_1_COUNT, c.part_id, to_decimal(d.memory_size_in_total/1024/1024/1024,2,2), to_decimal(d.disk_size/1024/1024/1024,2,2)
order by a.table_name, c.part_id;

4.3 Execute

This phase consists of three sub-phases:

  • Configure HANA NSE Buffer Cache
  • Add objects to NSE
  • Monitor the NSE Buffer Cache

4.3.1 Configure HANA NSE Buffer Cache

The following HANA parameters should be set:

  • max_size OR max_size_rel
  • unload_threshold

Inifile    | Section         | Parameter        | Value           | Remarks
global.ini | buffer_cache_cs | max_size         | 10% of RAM      | Explicitly specifies the upper limit of the buffer cache, in MB.
global.ini | buffer_cache_cs | max_size_rel     | % of GAL        | Specifies the upper limit of the buffer cache as a percentage of the global allocation limit (GAL), or of the service allocation limit if set.
global.ini | buffer_cache_cs | unload_threshold | 80% of max_size | Specifies the percentage of the buffer cache’s maximum size to which it should be reduced automatically when buffer capacity is not fully used. 80% is a good starting value.

Note: When both max_size and max_size_rel are set, the system uses the smaller value. When applying the values at the DATABASE layer of a scale-out system, max_size_rel is relative to the allocation limit of each host, which can differ per host.
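
A minimal sketch of setting the relative size at the DATABASE layer for a specific tenant (the tenant name ‘HDB’ is an assumption):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','DATABASE','HDB') SET ('buffer_cache_cs','max_size_rel') = '10' WITH RECONFIGURE;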

SQL Commands to be performed in the SYSTEMDB:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('buffer_cache_cs','max_size') = '<size_in_MB>' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('buffer_cache_cs','max_size_rel') = '10' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('buffer_cache_cs','unload_threshold') = '80' WITH RECONFIGURE;

4.3.2 Add objects to NSE

For the identified objects (tables, columns, indexes, partitions), you can change the load unit to the required value with the following commands (a concrete sketch follows the list):

  • Table
    ALTER TABLE <TABLE_NAME> <TYPE> LOADABLE;
  • Index
    ALTER INDEX <INDEX_NAME> <TYPE> LOADABLE;
  • Column
    ALTER TABLE <TABLE_NAME> ALTER (<COLUMN> ALTER <TYPE> LOADABLE);
  • Partition
    ALTER TABLE <TABLE_NAME> ALTER PARTITION <PARTITION_ID> <TYPE> LOADABLE;
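
For illustration, a few concrete statements following these templates, where <TYPE> is PAGE or COLUMN (all object names below are assumptions):

ALTER TABLE "SAPABAP1"."BALDAT" PAGE LOADABLE;
ALTER INDEX "SAPABAP1"."BALDAT~0" PAGE LOADABLE;
ALTER TABLE "SAPABAP1"."CDPOS" ALTER ("OBJECTID" ALTER COLUMN LOADABLE);
ALTER TABLE "SAPABAP1"."ZSALES" ALTER PARTITION 2 PAGE LOADABLE;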

4.3.3 Monitoring the Buffer Cache

4.3.3.1 HANA Cockpit

The buffer cache can be monitored from the HANA Cockpit tile “Native Storage Extension”.

4.3.3.2 SQL Commands

The following views can also be queried:

  • HOST_BUFFER_CACHE_STATISTICS
  • HOST_BUFFER_CACHE_POOL_STATISTICS

4.3.3.2.1 NSE Buffer Cache Usage

The following query can be used to determine the current usage of the NSE Buffer Cache. The “Used%” should be below 80:

select to_decimal((sum(USED_SIZE)/1024/1024/1024) / (avg(MAX_SIZE)/1024/1024/1024) * 100,2,2) "Used %"
from M_BUFFER_CACHE_STATISTICS
where CACHE_NAME = 'CS'
group by CACHE_NAME;

4.3.3.2.2 NSE Buffer Cache Hit Ratio

The following query can be used to determine the Hit Ratio of the NSE Buffer Cache. The “Hit Ratio %” should be above 90:

select to_decimal(avg(HIT_RATIO),2,2) "HIT_RATIO %" from M_BUFFER_CACHE_STATISTICS
where CACHE_NAME = 'CS'
and not HIT_RATIO = 0
group by CACHE_NAME;

4.3.3.2.3 Max. NSE Buffer Cache Size

The following query can be used to determine the maximum possible size of the NSE Buffer Cache, based on the current offloaded objects:

select to_decimal(sum("Size_in_Bytes")/1024/1024/1024,2,2) "Size_in_GB"
from
(select sum(ESTIMATED_MAX_MEMORY_SIZE_IN_TOTAL) "Size_in_Bytes"
from M_CS_TABLES
where LOAD_UNIT = 'PAGE'
union all
select sum(UNCOMPRESSED_SIZE) "Size_in_Bytes"
from M_CS_COLUMNS
where LOAD_UNIT = 'PAGE'
and not TABLE_NAME in (select TABLE_NAME from M_CS_TABLES where LOAD_UNIT = 'PAGE')
union all
select sum(INDEX_SIZE) "Size_in_Bytes"
from M_RS_INDEXES
union all
select sum(MEMORY_SIZE_IN_TOTAL) "Size_in_Bytes"
from M_TABLE_PARTITIONS
where LOAD_UNIT = 'PAGE'
and not TABLE_NAME in (select TABLE_NAME from M_CS_TABLES where LOAD_UNIT = 'PAGE'));

4.3.3.2.4 NSE Buffer Cache Max. Size vs Configured Size

The following query can be used to determine the maximum possible size of the NSE Buffer Cache, based on the current offloaded objects versus the configured size.

The “Max vs Conf %” should be lower than 100:

select to_decimal((sum(a."Size_in_Bytes")/b."Max_Bytes")*100,2,0) as "Max_vs_Conf %"
from
(select sum(ESTIMATED_MAX_MEMORY_SIZE_IN_TOTAL) "Size_in_Bytes"
from M_CS_TABLES
where LOAD_UNIT = 'PAGE'
union all
select sum(UNCOMPRESSED_SIZE) "Size_in_Bytes"
from M_CS_COLUMNS
where LOAD_UNIT = 'PAGE'
and not TABLE_NAME in (select TABLE_NAME from M_CS_TABLES where LOAD_UNIT = 'PAGE')
union all
select sum(INDEX_SIZE) "Size_in_Bytes"
from M_RS_INDEXES
union all
select sum(MEMORY_SIZE_IN_TOTAL) "Size_in_Bytes"
from M_TABLE_PARTITIONS
where LOAD_UNIT = 'PAGE'
and not TABLE_NAME in (select TABLE_NAME from M_CS_TABLES where LOAD_UNIT = 'PAGE')) a,
(select top 1 MAX_SIZE "Max_Bytes"
from _SYS_STATISTICS.HOST_BUFFER_CACHE_STATISTICS
order by SERVER_TIMESTAMP desc) b
group by b."Max_Bytes";

4.3.3.3 SAP Focused Run monitoring

Active monitoring of the buffer cache can also be implemented using SAP Focused Run.

The metrics should be configured for each HANA tenant database, in System Monitoring / Performance / Database Performance.

The following metrics are monitored:

NSE Buffer Cache Metric      | Yellow | Red    | SQL Reference
Usage                        | > 80   | > 90   | See NSE Buffer Cache Usage
Hit Ratio                    | < 95   | < 90   | See NSE Buffer Cache Hit Ratio
Max. Size vs Configured Size | >= 98  | >= 100 | See NSE Buffer Cache Max. Size vs Configured Size

4.3.4 Edit NSE Buffer Cache

If the NSE Buffer Cache should be resized, based on the analysis above, calculate the required size in MB.

In the HANA Cockpit this can be changed in the Buffer Cache Monitor screen, by clicking on the button “Configure Buffer Cache Size”:


Set the required values (in MB) and click “Configure”:

This can also be changed with an SQL command:
(Change the <size_in_MB> to the correct value)

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('buffer_cache_cs','max_size') = '<size_in_MB>' WITH RECONFIGURE;

5. Performance impact

5.1 NSE Advisor

In theory, the NSE Advisor causes overhead on the database; in practice, however, we have not noticed a significant increase in CPU or memory consumption.

5.2 NSE Buffer Cache

NSE Buffer Cache sizing is an important and regular exercise.

A buffer cache that is too large avoids any disk I/O, but it also keeps too much rarely accessed warm data in memory: performance is optimal, but the buffer cache size is inefficient. If that much memory were required, it would be better to keep all data column loadable (no NSE usage).

A buffer cache that is too small means the requested warm data cannot be loaded into memory, which leads to query cancellations (Out-of-Buffer events). Alternatively, the disk I/O for warm-data processing causes unacceptable query performance, which can even decrease the overall system performance of HANA.

Note:
It is not recommended to aggressively set a very low unload_threshold percentage to force the buffer cache to release allocated memory. This may lead to a situation where the buffer cache has to allocate memory again immediately after releasing it.
The de-allocation and re-allocation of memory causes considerable overhead and performance degradation in NSE.

6. Required Permissions

Analysis can be done with your personal user ID, but do set up the required HANA roles and permissions.

7. Periodic Tasks

The following tasks have to be performed regularly (at least every 6 months):

  • Monitoring
  • Check out-of-buffer events
  • Run NSE advisor

7.1 Check Out-Of-Buffer Events

When your buffer cache is not large enough to handle your workload, SAP HANA generates an alert message.

During an out-of-buffer situation, regular user tasks may be rolled back, and critical tasks (for example, recovery) use the emergency buffer pool.

Checking the OOB events can be done via the HANA Cockpit: in the tile “Native Storage Extension”, choose “Buffer Cache Monitor” and select the tab “Out of Buffer Events”.

For each out of buffer entry, the following information is provided:

Column        | Description
Host/Port     | Specifies the host name and internal port of the HANA service.
Type          | Specifies the type of the event.
Reason        | Specifies additional information for the event (for example, the amount of buffer cache used and how much is needed).
Creation Time | Specifies the time the event occurred.
Update Time   | Specifies the time the event was last updated.
Handle Time   | Specifies the time a user performed an operation that resolved or alleviated the issue related to the event.
State         | Specifies the current state of the event (for example, NEW, HANDLED).

Determine why these OOB events happened. Possible causes and solutions:

Reason                                | Possible Solution(s)
Buffer cache size too low             | Increase the NSE Buffer Cache
Objects being accessed too frequently | Change the load attribute to put them into main memory again; partition the table and determine which partitions should be put into NSE

7.2 Run NSE Advisor

Activate the NSE Advisor once every 6 months, during a representative workload, to review the current NSE settings.

This might give an indication that tables have to be added to the main memory again and/or that additional tables can be offloaded to the NSE.

8. References

Data archiving: Financial documents

This blog explains how to archive financial documents via object FI_DOCUMNT. The generic technical setup must already have been executed; it is explained in this blog.

Object FI_DOCUMNT

Go to transaction SARA and select object FI_DOCUMNT.

Dependency schedule:

No dependencies.

Main tables that are archived:

  • BKPF (accounting document header)
  • BSAD (accounting document secondary index for customers)
  • BSAK (accounting document secondary index for vendors)
  • BSAS (accounting document secondary index for GL accounts cleared items)
  • BSEG / RFBLG (cluster for accounting document)

Technical programs and OSS notes

Write program: FI_DOCUMNT_WRI

Delete program: FI_DOCUMNT_DEL

Post processing program: FI_DOCUMNT_PST

Read program: the read program in SARA refers to FAGL_ACCOUNT_ITEMS_GL. You can use this program with online data and archive files as data sources.

Reloading of FI_DOCUMNT was supported in the past, but is no longer: see 2072407 – FI_DOCUMNT: Reloading of archived data.

Relevant OSS notes:

Application specific customizing

In the application specific customizing for FI_DOCUMNT you can maintain the document retention time settings:

Executing the write run and delete run

In transaction SARA for object FI_DOCUMNT, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the company code and year. This is needed for data retrieval later on.

After the write run is done, check the logs. FI_DOCUMNT archiving has average speed and a high archiving percentage (up to 90 to 99%). Most of the items that cannot be archived are open items.

For the items that cannot be archived, you can use transaction FB99 to check why. Read the full explanation in this blog.

Deletion run is standard by selecting the archive file and starting the deletion run.

Post processing

It is important to run the post processing of FI_DOCUMNT. After archiving, BSAK, BSAS and BSIS secondary table records are still present with the field ARCHIV marked with ‘X’. See notes 2683107 – Archived items still exist in BSAK and 2775018 – Archived items still exist in BSAK.

Standard SAP will deal correctly with this field. But custom made reports and the average BI data analyst will not.

So run post processing directly after the deletion run.

Data retrieval

For FI_DOCUMNT multiple read programs are available. To select them press the F4 search:

Start the program and select the button Data Source, tick the archive as well and select the archive files to read:

Due to data volume the query might run long. Select the archive files with care.

Most finance programs to list data use this Data Source principle.

SARI infostructure issues

If you get issues for sessions which are in error in SARI for infostructure SAP_FI_DOC_002, read the potential workaround below.

The reason we see the errors for the archive session when we attempt to refill is that a number of documents already exist in the infostructure, and there cannot be duplicates in the infostructure table.

The most common cause is that the variant for the WRITE program was set so the same document got archived twice into different archive files.

What can be done? If it is OK to have the same document in different files, you can ignore the archive session entries with errors in SARI. To avoid duplicate keys in the infostructure in the future, you can add the filename as an extra key field to the infostructure. This can be done as follows:

  • SARI -> Customizing -> SAP_FI_DOC_002 -> Display
  • Technical data
  • Change the field “File Name Processing” from ‘D’ to ‘K’.

Note that this method is untested.

Parallel processing

There is an option to archive Financial documents in parallel processing mode. This is an advanced development option. It uses archiving object FI_DOC_PPA. For more information read OSS note 1779727 – FI_DOCUMNT: Integrating parallel process with archiving.

If your data volume is manageable and you can archive during nights and weekends, keep it simple and use FI_DOCUMNT.

Deleting SAP office documents

In your system, SAP office documents are consuming more and more space. You want to clean up these tables: BCST_CAM, BCST_SR, SOC3, SOFFCONT1, SOFM, SOOD, SOOS, SOST. This blog will explain how.

Questions that will be answered are:

  • How to reduce size of tables BCST_CAM, BCST_SR, SOC3, SOFFCONT1, SOFM, SOOD, SOOS, SOST?
  • How to run the clean up programs before running RSBCS_REORG?
  • How to run program RSBCS_REORG?

SAP office documents to content server

If table SOFFCONT1 is growing fast, you can migrate data to the content server. This might be the easiest solution. Read more in this blog.

See also note 1634908 – Reduce the number of entries of table SOFFCONT1.

See OSS note 3225275 – BC-SRV-COM Guided Answer for the guided Q&A.

Preparations before running RSBCS_REORG

SAP office documents are stored in table SOFFCONT1 and can be deleted with program RSBCS_REORG. See note 966854 – Reorganization – new report. Note 988057 – Reorganization – information used to contain a very useful PDF document that explains what to do in cases where RSBCS_REORG cannot directly delete an SAP office document. The attachment was removed, but you can download it here:

In most cases you have to run a special program that breaks the link between the document and the data. After that is done, you can delete the content. Extra explanation is in OSS note 1641830 – Reorganization of Business Communication Services data (RSBCS_REORG).

Set expiration date

Run program RSBCS_SREQ_EXPIRE (see OSS note 1478279 – Deleting send requests that have not expired) to set the expiration date where needed:

This will remove the expiry dates that block deletion.

Release the send requests

Run program RSBCS_SREQ_RELEASE (see OSS note 1238740 – Deleting unreleased send requests):

Don’t select too much data, otherwise the program will dump (see note 3033843 – Program RSBCS_SREQ_RELEASE failed with dump TSV_TNEW_PAGE_ALLOC_FAILED).

This will release the send requests for deletion.

But it will only do so if a document is attached. Apply OSS note 3256308 – Initial send requests are not released to get the new program RSBCS_SREQ_INITIAL_RELEASE:

This will also release the send request for deletion in case there is no document attached.

Unlink document from application

Run program RSBCS_SREQ_UNLINK (see note 1276402 – Send requests cannot be deleted due to links) to unlink the document from the object:

Document is in SAP connect queue

Documents might still be in the SAP connect queue. Use program RSBCS_DELETE_QUEUE (see OSS note 1244680 – Deleting send requests from the queue) to remove the queue for the documents:

Document is in a folder

If the document is in a folder, it might be in a different sort of folder:

  • Hidden folder
  • Private folder
  • Document has GOS link

Document is in a hidden folder

Run program RSSODFRE (see OSS note 567975 – Hidden folder: Reorganization) to delete documents from the hidden folders:

Document is in a private folder

Run program RSSO_DELETE_PRIVATE (see OSS note 922671 – Deleting folder entries) to delete from private folder:

Document has GOS link

To be able to delete documents with a GOS link, run program RSGOSRE01 (see OSS notes 569123 – Reorganizing documents from generic object services, 1641800 – How to delete ‘Has links to GOS/Document not sent’ documents from the Hidden folder, and 2016688 – RSGOSRE01: New selection parameters):

Or run program RSGOSRE02 (OSS note 2031234 – RSGOSRE02: Reorganizing documents from generic object services):

Running RSBCS_REORG

Program RSBCS_REORG will not archive, but only delete.

Test this first and check with the data owner that the documents are no longer needed.

Program RSBCS_REORG selection criteria:

Output:

Double-clicking on a line will show the details of why the document cannot be deleted.

Bug fix OSS notes for RSBCS_REORG:

Custom program to mass cancel workflow items

Mass cancellation of work items is possible with the SWIA transaction as explained in this blog.

This blog will explain how you can use Z program to mass close like SWIA, but then in batch job mode, and automatically without user interaction.

The custom program

The custom program ZRSWIWILS_WI_CANCEL is basically a copy of standard SAP program RSWIWILS.

You can copy and paste the code from below source code text and add the text symbols and selection screen parameter text.

The start screen has one extra option compared to standard SAP:

The program working explained

When the new option is selected, then after the data fetching the program simply calls function module SWW_WI_ADMIN_CANCEL to cancel each item. Since this is done in a loop and without user interaction, the program can run in batch mode.

The custom program source code text

* Report ZRSWIWILS_WI_CANCEL
*----------------- Standard program RSWIWILS copy ---------------------*
*---------- This code is to extend existing functionality--------------*

REPORT zrswiwils_wi_cancel MESSAGE-ID swf_rep_base.

INCLUDE rswlfcod.
INCLUDE rswuincl.
CLASS cl_swf_rdg_dispatcher DEFINITION LOAD.

************************************************************************
*  Begin of Data                                                       *
************************************************************************
DATA: g_list_cnt TYPE sy-tabix.
DATA: zlt_wiheader TYPE swfatalvitm.
DATA: int TYPE REF TO if_swf_rep_workitem_selection.
DATA: tcode LIKE sy-tcode.
DATA: g_windows_titlebar TYPE string.
CONSTANTS: zlc_tcode  TYPE sytcode VALUE 'SWIA'.
*- type pools
TYPE-POOLS: slis.
TABLES: swfawrkitm.

*- select options
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE TEXT-a01.

SELECT-OPTIONS: id FOR swfawrkitm-wi_id.
SELECTION-SCREEN END OF BLOCK b1.

SELECTION-SCREEN BEGIN OF BLOCK b2 WITH FRAME TITLE TEXT-a02.
SELECT-OPTIONS:
    type   FOR swfawrkitm-wi_type,
    state  FOR swfawrkitm-wi_stat,
    prio   FOR swfawrkitm-wi_prio,
    dhsta  FOR swfawrkitm-wi_dh_stat,
    task   FOR swfawrkitm-wi_rh_task,
    taskg  FOR swfawrkitm-wi_rh_task NO INTERVALS.
PARAMETERS: top_only TYPE xfeld.
SELECTION-SCREEN END OF BLOCK b2.

SELECTION-SCREEN BEGIN OF BLOCK b3 WITH FRAME TITLE TEXT-a03.
SELECT-OPTIONS:
    cd FOR swfawrkitm-wi_cd,
    ct FOR swfawrkitm-wi_ct NO-EXTENSION.
SELECTION-SCREEN END OF BLOCK b3.

SELECTION-SCREEN BEGIN OF BLOCK b6 WITH FRAME TITLE TEXT-a06.
PARAMETERS: p_cancel TYPE xfeld DEFAULT ' '.
SELECTION-SCREEN END OF BLOCK b6.

SELECTION-SCREEN BEGIN OF BLOCK b4 WITH FRAME TITLE TEXT-a04.
PARAMETERS: p_more TYPE swi_params-option AS CHECKBOX.
PARAMETERS: filter TYPE swf_utl002-clsname NO-DISPLAY.
PARAMETERS: p_swia TYPE xfeld NO-DISPLAY DEFAULT space. " note 1274031
SELECTION-SCREEN END OF BLOCK b4.

SELECTION-SCREEN BEGIN OF BLOCK b5 WITH FRAME TITLE TEXT-a05.
PARAMETERS: p_maxsel TYPE tbmaxsel.
SELECTION-SCREEN END OF BLOCK b5.

*-------------------------------------------------------------
INITIALIZATION.
*-------------------------------------------------------------
  cd-low = sy-datum.
  cd-sign = 'I'.
  cd-option = 'EQ'.
  APPEND cd.

*-------------------------------------------------------------
*- F4 functionality
*-------------------------------------------------------------
AT SELECTION-SCREEN ON VALUE-REQUEST FOR task-low.
  DATA: act_object_ext TYPE rhobjects-object.

  CALL FUNCTION 'RH_SEARCH_TASK'
    IMPORTING
      act_object_ext         = act_object_ext
    EXCEPTIONS
      no_active_plvar        = 1
      no_org_object_selected = 2
      no_valid_task_type     = 3
      OTHERS                 = 4.
  IF sy-subrc EQ 0.
    task-low = act_object_ext.
  ENDIF.

AT SELECTION-SCREEN ON VALUE-REQUEST FOR task-high.
  DATA: act_object_ext TYPE rhobjects-object.

  CALL FUNCTION 'RH_SEARCH_TASK'
    IMPORTING
      act_object_ext         = act_object_ext
    EXCEPTIONS
      no_active_plvar        = 1
      no_org_object_selected = 2
      no_valid_task_type     = 3
      OTHERS                 = 4.
  IF sy-subrc EQ 0.
    task-high = act_object_ext.
  ENDIF.

*-------------------------------------------------------------
START-OF-SELECTION.
*-------------------------------------------------------------
  PERFORM main USING p_more p_maxsel.

*---------------------------------------------------------------------*
*       FORM MAIN                                                     *
*---------------------------------------------------------------------*
*       ........                                                      *
*---------------------------------------------------------------------*
FORM main USING p_more TYPE xfeld
                p_maxsel TYPE tbmaxsel.
  DATA: field_lst    TYPE slis_t_fieldcat_alv,
        field_cat    TYPE slis_fieldcat_alv,
        is_layout    TYPE slis_layout_alv,
        is_variant   LIKE disvariant,
        it_sort      TYPE slis_t_sortinfo_alv,
        l_string     TYPE string,
        l_grid_title TYPE lvc_title.

*- prepare the list format (determine columns...)
  PERFORM prepare_format CHANGING field_lst
                                  field_cat
                                  is_layout
                                  it_sort.
  is_variant-report = sy-repid.

*- get the list from the database.
  PERFORM get_workitem_header USING    p_more
                                       p_maxsel
                              CHANGING zlt_wiheader.
*- check empty
  DATA: hits TYPE i.
  DESCRIBE TABLE  zlt_wiheader LINES hits.
  IF NOT hits IS INITIAL.

*- set table layout
    is_layout-cell_merge = 'X'.

*- set title
    is_layout-window_titlebar =
                  'Workitems Cancel'(001).

    g_windows_titlebar = is_layout-window_titlebar.
    PERFORM get_title USING hits
                            g_windows_titlebar
                            l_grid_title.
    is_layout-window_titlebar = l_grid_title.

    IF p_cancel IS NOT INITIAL AND zlt_wiheader IS NOT INITIAL.
* Cancel the work item
      PERFORM cancel_workitem TABLES zlt_wiheader.
    ENDIF.
*- call the CO function to display the list.
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
      EXPORTING
        i_callback_program      = CONV syrepid( 'ZRSWIWILS_WI_CANCEL' )
        i_callback_user_command = 'CALL_UCOMM_WILIST'
        is_layout               = is_layout
        it_fieldcat             = field_lst
        it_sort                 = it_sort
        is_variant              = is_variant
      TABLES
        t_outtab                = zlt_wiheader
      EXCEPTIONS
        OTHERS                  = 1.
  ELSE.
    MESSAGE s003.
  ENDIF.

ENDFORM.                    "main

*---------------------------------------------------------------------*
*       FORM PREPARE_FORMAT                                           *
*---------------------------------------------------------------------*
*       ........                                                      *
*---------------------------------------------------------------------*
*  -->  FIELD_LST                                                     *
*  -->  FIELD_CAT                                                     *
*  -->  IS_LAYOUT                                                     *
*---------------------------------------------------------------------*
FORM prepare_format CHANGING field_lst TYPE slis_t_fieldcat_alv
                                 field_cat TYPE slis_fieldcat_alv
                                 is_layout TYPE slis_layout_alv
                                 it_sort   TYPE slis_t_sortinfo_alv.
  DATA: is_sort LIKE LINE OF it_sort.
  DATA: structure_name TYPE dd02l-tabname VALUE 'SWFAWRKITM'.
  DATA: lt_fieldcat TYPE lvc_t_fcat.
  DATA: ls_fieldcat TYPE lvc_s_fcat.
  DATA: wf_settings TYPE swp_admin.
  DATA: lh_aging_services TYPE REF TO cl_sww_daag_services.
  FIELD-SYMBOLS: <field_lst> LIKE LINE OF field_lst.

  is_layout-box_fieldname = 'B_MARKED'.
  is_layout-box_tabname   = 'WIHEADER'.

*- prepare sort
  PERFORM set_sort_tab CHANGING it_sort.

*- get fieldcatalog
  CALL FUNCTION 'LVC_FIELDCATALOG_MERGE' 
    EXPORTING
      i_structure_name       = structure_name
    CHANGING
      ct_fieldcat            = lt_fieldcat
    EXCEPTIONS
      inconsistent_interface = 1
      program_error          = 2
      OTHERS                 = 3.

  LOOP AT lt_fieldcat INTO ls_fieldcat.
    IF ls_fieldcat-fieldname EQ 'WI_RHTEXT'.
      ls_fieldcat-lowercase = 'X'.
    ENDIF.
    MOVE-CORRESPONDING ls_fieldcat TO field_cat.
    field_cat-seltext_s = ls_fieldcat-scrtext_s.
    field_cat-seltext_l = ls_fieldcat-scrtext_l.
    field_cat-seltext_m = ls_fieldcat-scrtext_m.
    APPEND field_cat TO field_lst.
  ENDLOOP.

*- no reference for B_MARKED
  CLEAR field_cat-ref_tabname.
  field_cat-tech = 'X'.
  field_cat-fieldname = 'B_MARKED'.
  APPEND field_cat TO field_lst.
  CLEAR field_cat-tech.

*- set columns positions
  LOOP AT field_lst ASSIGNING <field_lst>.
    CASE <field_lst>-fieldname.
      WHEN 'WI_ID'.
        <field_lst>-col_pos = 1.
      WHEN 'WI_STAT'.
        <field_lst>-col_pos = 2.
      WHEN 'WI_CHCKWI'.
        <field_lst>-col_pos = 3.
      WHEN 'TYPETEXT'.
        <field_lst>-col_pos = 4.
        <field_lst>-outputlen = 15.
      WHEN 'WI_RH_TASK'.
        <field_lst>-col_pos = 5.
        <field_lst>-outputlen = 11.
      WHEN 'WI_CD'.
        <field_lst>-col_pos = 6.
        <field_lst>-outputlen = 10.
      WHEN 'WI_CT'.
        <field_lst>-col_pos = 7.
        <field_lst>-outputlen = 8.
      WHEN 'WI_TEXT'.
        <field_lst>-col_pos = 8.
        <field_lst>-outputlen = 80.
      WHEN 'WI_CONFIRM'.
        <field_lst>-col_pos = 9.
      WHEN 'WI_REJECT'.
        <field_lst>-col_pos = 10.
      WHEN 'WI_PRIOTXT'.
        <field_lst>-col_pos = 11.
      WHEN 'RETRY_CNT'.
        <field_lst>-col_pos = 12.
      WHEN 'TOP_TASK'.
        <field_lst>-col_pos = 13.
      WHEN OTHERS.
        <field_lst>-col_pos = 999999.
    ENDCASE.
  ENDLOOP.
  SORT field_lst BY col_pos.
  LOOP AT field_lst ASSIGNING <field_lst>.
    <field_lst>-col_pos = sy-tabix.
  ENDLOOP.

*- set standard layout
  CLEAR field_cat.
  field_cat-no_out = 'X'.
  MODIFY field_lst FROM field_cat TRANSPORTING no_out
     WHERE fieldname EQ 'WI_LANG' OR
           fieldname EQ 'WI_TYPE' OR
           fieldname EQ 'VERSION' OR
           fieldname EQ 'WI_PRIO' OR
           fieldname EQ 'NOTE_CNT' OR
           fieldname EQ 'WI_RELEASE' OR
           fieldname EQ 'STATUSTEXT' OR
           fieldname EQ 'TCLASS' OR
           fieldname EQ 'WI_DH_STAT' OR
           fieldname EQ 'RETRY_CNT' OR
           fieldname EQ 'WLC_FLAGS' OR
           fieldname EQ 'TOP_TASK' OR
           fieldname EQ 'AGING_STATE' OR
           fieldname EQ 'AGING_TEMPERATURE'.

*- delete aging fields if not applicable
  lh_aging_services = cl_sww_daag_services=>get_instance( ).
  IF lh_aging_services->aging_enabled( ) NE 'X'.
    DELETE field_lst WHERE fieldname EQ 'AGING_STATE'
                        OR fieldname EQ 'AGING_TEMPERATURE'.
  ENDIF.

*- set checkboxes
  CLEAR field_cat.
  field_cat-checkbox = 'X'.
  MODIFY field_lst FROM field_cat TRANSPORTING checkbox
     WHERE fieldname EQ 'WI_CONFIRM' OR
           fieldname EQ 'WI_REJECT' OR
           fieldname EQ 'WI_DEADEX' OR
           fieldname EQ 'NOTE_EXIST' OR
           fieldname EQ 'ASYNCAGENT'.

  IF tcode = 'SWI2_ADM1'. " work items without agent
    CLEAR field_cat.
    field_cat-col_pos = '0'.
    field_cat-key = 'X'.
    MODIFY field_lst FROM field_cat TRANSPORTING col_pos key
       WHERE fieldname EQ 'ASYNCAGENT'.
  ELSE.
    CLEAR field_cat.
    field_cat-tech = 'X'.
    MODIFY field_lst FROM field_cat TRANSPORTING tech
       WHERE fieldname EQ 'ASYNCAGENT'.
  ENDIF.

*- delete agents if necessary
  CALL FUNCTION 'SWP_ADMIN_DATA_READ' 
    IMPORTING
      wf_settings = wf_settings
    EXCEPTIONS
      OTHERS      = 1.
  IF wf_settings-no_agents = 'X'.
    field_cat-tech = 'X'.
    MODIFY field_lst FROM field_cat TRANSPORTING tech
       WHERE fieldname EQ 'EXEUSER' OR
             fieldname EQ 'FORW_BY'.
  ENDIF.

ENDFORM.                    "prepare_format

*---------------------------------------------------------------------*
*       FORM CALL_UCOMM_WILIST                                        *
*---------------------------------------------------------------------*
*       Dynamic call to process the keyboard input.                   *
*---------------------------------------------------------------------*
*  -->  UCOMM                                                         *
*  -->  SELFIELD                                                      *
*---------------------------------------------------------------------*

FORM call_ucomm_wilist   USING ucomm TYPE syucomm
                               selfield TYPE slis_selfield.
  DATA: old_list_cnt    LIKE g_list_cnt,
        s_return        LIKE swl_return,
        line_idx        LIKE sy-tabix,
        l_tabix         LIKE sy-tabix,
        linesel_cnt     LIKE sy-tabix,
        b_line_selected LIKE sy-binpt,
        ls_wiheader     TYPE LINE OF swfatalvitm.
  DATA: lt_wrkitm       TYPE swfatwrkitm.
  DATA: delta_list_cnt  TYPE sytabix.
  DATA: ls_por          TYPE sibflpor.
  DATA: lt_por          TYPE sibflport.
  DATA: lv_ucomm        TYPE syucomm.
  DATA: l_excp          TYPE REF TO cx_swf_ifs_exception.
  DATA: ls_suspend      TYPE swp_suspen.
  DATA: ls_swwwidh      TYPE swwwidh.
  DATA: l_wi_index      TYPE sytabix.
  DATA: lh_grid         TYPE REF TO cl_gui_alv_grid.
  DATA: l_grid_title    TYPE lvc_title.
  DATA: l_count         TYPE sytabix.


  PERFORM pick_line USING    selfield-tabindex
                             zlt_wiheader
                    CHANGING line_idx
                             linesel_cnt
                             b_line_selected.

  LOOP AT zlt_wiheader INTO ls_wiheader WHERE b_marked EQ 'X'.
    ls_por-catid  = swfco_objtype_bc.
    ls_por-instid = ls_wiheader-wi_id.
    APPEND ls_por TO lt_por.
  ENDLOOP.

  IF lt_por[] IS INITIAL.
    READ TABLE zlt_wiheader INDEX line_idx INTO ls_wiheader.
    ls_por-catid  = swfco_objtype_bc.
    ls_por-instid = ls_wiheader-wi_id.
    APPEND ls_por TO lt_por.
  ENDIF.

************************************************************************
*    Refresh Instancemanager                                         *
************************************************************************
  TRY.
      CALL METHOD cl_swf_run_wim_factory=>initialize( ).
    CATCH cx_swf_ifs_exception INTO l_excp.
      CALL METHOD cl_swf_utl_message=>send_message_via_exception( l_excp ).
  ENDTRY.

  CASE ucomm.
************************************************************************
*    Filter                                                            *
************************************************************************
    WHEN '&ILT'.

************************************************************************
*    Refresh                                                           *
************************************************************************
    WHEN '1REF'.
      CALL METHOD int->refresh
        IMPORTING
          ex_delta_count = delta_list_cnt.
      CALL METHOD int->get_entries
        IMPORTING
          ex_wientries = lt_wrkitm.
      CLEAR zlt_wiheader[].
      PERFORM convert_to_alv_list USING    lt_wrkitm
                                           p_more
                                  CHANGING zlt_wiheader.
      MESSAGE s811(w8) WITH delta_list_cnt.
      selfield-refresh    = 'X'.
      selfield-row_stable = 'X'.

************************************************************************
*   Pick                                                               *
************************************************************************
    WHEN '&IC1'.
      IF linesel_cnt > 1.
        MESSAGE s201(wi).
        EXIT.
      ENDIF.
      IF  line_idx     = 0.
        MESSAGE s004(0k).
        EXIT.
      ENDIF.

      READ TABLE zlt_wiheader INDEX line_idx INTO ls_wiheader.
      ls_por-catid  = 'BC'.
      ls_por-instid = ls_wiheader-wi_id.

      IF ls_wiheader-wi_type = wi_flow.
        lv_ucomm = function_wi_workflow_display.
      ELSE.
        lv_ucomm = cl_swf_rdg_dispatcher=>c_function_wi_display.
      ENDIF.
      CALL METHOD cl_swf_rdg_dispatcher=>execute_dialog_request
        EXPORTING
          im_por      = ls_por
          im_function = lv_ucomm
        EXCEPTIONS
          OTHERS      = 1.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                   WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.

    WHEN OTHERS.

*      IF  line_idx     = 0.  "--- OSS note 1422569 ---
      IF linesel_cnt   = 0.
        MESSAGE s004(0k).
        EXIT.
      ENDIF.

      CALL METHOD cl_swf_rdg_dispatcher=>execute_dialog_request_multi 
        EXPORTING
          im_por      = lt_por
          im_function = ucomm
        EXCEPTIONS
          OTHERS      = 1.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                   WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.

  ENDCASE.

  DATA: ls_layout TYPE slis_layout_alv.
  DATA: lt_filtered_entries TYPE  slis_t_filtered_entries.
  DATA: l_lines TYPE i.
  CALL FUNCTION 'REUSE_ALV_GRID_LAYOUT_INFO_GET'
    IMPORTING
      es_layout           = ls_layout
      et_filtered_entries = lt_filtered_entries
    EXCEPTIONS
      OTHERS              = 1.
  IF sy-subrc EQ 0.
    DESCRIBE TABLE lt_filtered_entries LINES l_lines.
    DESCRIBE TABLE zlt_wiheader LINES l_count.
    l_count = l_count - l_lines.
    PERFORM get_title USING l_count
                            g_windows_titlebar
                            l_grid_title.
    ls_layout-window_titlebar = l_grid_title.
    CALL FUNCTION 'REUSE_ALV_GRID_LAYOUT_INFO_SET'
      EXPORTING
        is_layout = ls_layout.
  ENDIF.

ENDFORM.                    "call_ucomm_wilist

*&---------------------------------------------------------------------*
*&      Form  set_sort_tab
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_IT_SORT  sorttab (ALV format)
*----------------------------------------------------------------------*
FORM set_sort_tab CHANGING p_it_sort TYPE slis_t_sortinfo_alv.
  DATA: is_sort TYPE slis_sortinfo_alv.

  REFRESH p_it_sort.
  is_sort-tabname = 'WIHEADER'.
  is_sort-spos = 1.
  is_sort-fieldname = 'WI_CD'.
  is_sort-down = 'X'.
  APPEND is_sort TO p_it_sort.

  is_sort-spos = 2.
  is_sort-fieldname = 'WI_CT'.
  is_sort-down = 'X'.
  APPEND is_sort TO p_it_sort.

  is_sort-spos = 3.
  is_sort-fieldname = 'WI_ID'.
  is_sort-down = 'X'.
  APPEND is_sort TO p_it_sort.

ENDFORM.                               " set_sort_tab
*&---------------------------------------------------------------------*
*&      Form  get_workitem_header
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_WI_HEADER  text
*----------------------------------------------------------------------*
FORM get_workitem_header USING    p_more      TYPE xfeld
                                  p_maxsel    TYPE tbmaxsel
                         CHANGING p_wi_header TYPE swfatalvitm.

  DATA: rangetab_for_id TYPE swfartwiid.
  DATA: rangetab_for_type TYPE swfartwitp.
  DATA: rangetab_for_creation_date TYPE swfartcrdat.
  DATA: rangetab_for_creation_time TYPE swfartcrtim.
  DATA: rangetab_for_task TYPE swfartrhtsk.
  DATA: rangetab_for_state TYPE swfartwista.
  DATA: rangetab_for_priority TYPE swfartprio.
  DATA: rangetab_for_dhsta TYPE swfartdhsta.
  DATA: lt_wrkitm TYPE swfatwrkitm.
  DATA: ls_wrkitm TYPE LINE OF swfatwrkitm.
  DATA: ls_alvitm TYPE LINE OF swfatalvitm.

  IF int IS INITIAL.
    int = cl_swf_rep_manager=>get_instance( ).
  ENDIF.

*- set selection properties
  IF p_more EQ 'X'.
    CALL METHOD int->set_property
      EXPORTING
        im_name  = if_swf_rep_workitem_selection=>c_get_administrator
        im_value = 'X'.
  ENDIF.

*- convert parameters
  rangetab_for_id[] = id[].
  rangetab_for_type[] = type[].
  rangetab_for_creation_date[] = cd[].
  rangetab_for_creation_time[] = ct[].
  rangetab_for_task[] = task[].
  rangetab_for_state[] = state[].
  rangetab_for_priority[] = prio[].
  rangetab_for_dhsta[] = dhsta[].

  CALL METHOD int->clear( ).
  CALL METHOD int->set_filter_strategy( filter ).

  CALL METHOD int->set_range_tab( rangetab_for_id ).
  CALL METHOD int->set_range_tab( rangetab_for_type ).
  CALL METHOD int->set_range_tab( rangetab_for_creation_date ).
  CALL METHOD int->set_range_tab( rangetab_for_creation_time ).
  CALL METHOD int->set_range_tab( rangetab_for_task ).
  CALL METHOD int->set_range_tab( rangetab_for_state ).
  CALL METHOD int->set_range_tab( rangetab_for_priority ).
  CALL METHOD int->set_range_tab( rangetab_for_dhsta ).
  CALL METHOD int->set_only_top_wi( top_only ).

  CALL METHOD int->set_maxsel( p_maxsel ).

  CALL METHOD int->read( ).

  CALL METHOD int->get_entries
    IMPORTING
      ex_wientries = lt_wrkitm.

  PERFORM convert_to_alv_list USING    lt_wrkitm
                                       p_more
                              CHANGING p_wi_header.


ENDFORM.                               " get_workitem_header

*---------------------------------------------------------------------*
*       FORM convert_to_alv_list                                      *
*---------------------------------------------------------------------*
*       ........                                                      *
*---------------------------------------------------------------------*
*  -->  WIHEADER                                                      *
*  -->  WIALVITM                                                      *
*---------------------------------------------------------------------*
FORM convert_to_alv_list USING    wiheader TYPE swfatwrkitm
                                  more     TYPE xfeld
                         CHANGING wialvitm TYPE swfatalvitm.
  DATA: ls_wrkitm TYPE LINE OF swfatwrkitm.
  DATA: ls_alvitm TYPE LINE OF swfatalvitm.
  DATA: lv_wi_handle TYPE REF TO if_swf_run_wim_internal.
  DATA: ls_admin TYPE swhactor.
  DATA: lt_agents TYPE tswhactor.

  LOOP AT wiheader INTO ls_wrkitm.                  
    MOVE-CORRESPONDING ls_wrkitm TO ls_alvitm.
    IF ls_wrkitm-wlc_flags O swfcr_p_asynchronous_rule.
      ls_alvitm-asyncagent = 'X'.
    ENDIF.
    IF more EQ 'X' AND ls_alvitm-wi_type EQ swfco_wi_flow.
      TRY.
          CALL METHOD cl_swf_run_wim_factory=>find_by_wiid 
            EXPORTING
              im_wiid     = ls_wrkitm-wi_id
            RECEIVING
              re_instance = lv_wi_handle.
          lt_agents = lv_wi_handle->get_administrator_agents( ).
          READ TABLE lt_agents INDEX 1 INTO ls_admin.
          ls_alvitm-adm_agent = ls_admin.
        CATCH cx_swf_run_wim.
      ENDTRY.
    ENDIF.
    IF top_only IS INITIAL.
      APPEND ls_alvitm TO wialvitm.
    ELSE.
      IF ls_alvitm-wi_chckwi IS INITIAL.
        APPEND ls_alvitm TO wialvitm.
      ENDIF.
    ENDIF.
  ENDLOOP.

ENDFORM.                    "convert_to_alv_list

*---------------------------------------------------------------------*
*       FORM PICK_LINE                                                *
*---------------------------------------------------------------------*
*       ........                                                      *
*---------------------------------------------------------------------*
*  -->  SELFIELD_IDX                                                  *
*  -->  WIHEADER                                                      *
*  -->  INDEX                                                         *
*  -->  LINESEL_CNT                                                   *
*  -->  B_OK                                                          *
*---------------------------------------------------------------------*
FORM pick_line USING    selfield_idx LIKE sy-tabix
                        wiheader     TYPE swfatalvitm
               CHANGING index        LIKE sy-tabix
                        linesel_cnt  LIKE sy-tabix
                        b_ok         LIKE sy-binpt.
  DATA: lines_marked LIKE sy-tabix,
        marked_idx   LIKE sy-tabix,
        cursor_line  LIKE sy-binpt.

  IF selfield_idx > 0.
    READ TABLE wiheader INDEX selfield_idx TRANSPORTING NO FIELDS.
    IF sy-subrc = 0.
      cursor_line = 'X'.
    ENDIF.
  ENDIF.

  LOOP AT wiheader TRANSPORTING NO FIELDS WHERE b_marked = 'X'.
    ADD 1 TO lines_marked.
    marked_idx  = sy-tabix.
  ENDLOOP.

************************************************************************
*  List tool algorithm                                                 *
************************************************************************
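*  Selection precedence: the cursor position wins over marked lines;
*  a single marked line acts as the selection; with several marked
*  lines (and no cursor hit) only the count is returned for mass
*  processing.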
  IF cursor_line = 'X'.
    index = selfield_idx.
    IF lines_marked < 2.
      linesel_cnt = 1.
    ELSE.
      linesel_cnt = lines_marked.
    ENDIF.
    b_ok = 'X'.

  ELSEIF lines_marked = 1.
    index = marked_idx.
    linesel_cnt = 1.
    b_ok = 'X'.

  ELSEIF lines_marked > 1.
    linesel_cnt = lines_marked.
    b_ok = 'X'.

  ENDIF.

ENDFORM.                    "pick_line
*&---------------------------------------------------------------------*
*&      Form  GET_TITLE
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_HITS  text
*      -->P_L_STRING  text
*----------------------------------------------------------------------*
FORM get_title  USING p_hits TYPE sytabix
                      p_title_template TYPE string
                      p_title TYPE lvc_title.
  DATA: l_string TYPE string.
  DATA: l_count(10) TYPE n.

  IF p_hits > 1.
    l_string = '(&1 entries)'(014).
  ELSEIF p_hits EQ 1.
    l_string = '(1 entry)'(015).
  ENDIF.
  l_count = p_hits.
  SHIFT l_count LEFT DELETING LEADING '0'.
  CONCATENATE p_title_template l_string INTO p_title SEPARATED BY space.
  REPLACE '&1' IN p_title WITH l_count.

ENDFORM.                    " GET_TITLE
*&---------------------------------------------------------------------*
*&      Form  CANCEL_WORKITEM
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_ZLT_WIHEADER  text
*----------------------------------------------------------------------*
FORM cancel_workitem  TABLES p_zlt_wiheader TYPE swfatalvitm.
  TYPES: zlty_status TYPE RANGE OF sww_wistat.
  DATA: zlt_r_status TYPE zlty_status.

  CONSTANTS: zlc_stat_complete TYPE sww_statxt VALUE 'COMPLETED',
             zlc_stat_cancel   TYPE sww_statxt VALUE 'CANCELLED',
             zlc_sign          TYPE  ddsign VALUE 'I',
             zlc_options       TYPE ddoption VALUE 'EQ'.

  zlt_r_status = VALUE zlty_status(
                       LET s = zlc_sign
                           o = zlc_options
                           IN sign   = s
                              option = o
                              ( low = zlc_stat_complete )
                              ( low = zlc_stat_cancel ) ).
*- remove work items that are already COMPLETED or CANCELLED
  DELETE p_zlt_wiheader WHERE wi_stat IN zlt_r_status.

  IF p_zlt_wiheader[] IS NOT INITIAL.
    LOOP AT p_zlt_wiheader ASSIGNING FIELD-SYMBOL(<zlfs_wiheader>). 
      CALL FUNCTION 'SWW_WI_ADMIN_CANCEL'
        EXPORTING
          wi_id                       = <zlfs_wiheader>-wi_id
          do_commit                   = abap_true
        IMPORTING
          new_status                  = <zlfs_wiheader>-wi_stat
        EXCEPTIONS
          update_failed               = 1
          no_authorization            = 2
          infeasible_state_transition = 3
          OTHERS                      = 4.
*- error handling: the classic exceptions set sy-subrc and cannot be
*- caught with TRY/CATCH; log the error and continue with the next item
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE 'S' NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    ENDLOOP.
  ENDIF.
ENDFORM.
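
A minimal usage sketch (hypothetical; lt_wiheader stands for the rows selected in the ALV list):

DATA lt_wiheader TYPE swfatalvitm.

* fill lt_wiheader from the ALV selection, then:
PERFORM cancel_workitem TABLES lt_wiheader.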


Analysis of SYS_LOB* tables

When you are trying to manage your database size, you might see large SYS_LOB* tables on Oracle based systems.

Questions that will be answered in this blog are:

  • How can I find large SYS_LOB tables in my Oracle based SAP system?
  • How can I find the real table for the SYS_LOB?

Finding SYS_LOB table size

In DB02, select all segments with the name SYS_LOB* that are larger than 1 GB. In the output, the Lobs tab shows the table to which each LOB segment belongs.

From SYS_LOB name to table

Besides the direct view in DB02 described above, OSS note 2142497 – What to do for large lobsegment on Oracle database? – describes several ways to translate the SYS_LOB name to the real table.

You can fire this query in an ST04 SQL query; replace the SYS_LOB name with the one from your own system:

SELECT OWNER, TABLE_NAME, COLUMN_NAME FROM DBA_LOBS
WHERE 'SYS_LOB0007061918C00006$$' IN (SEGMENT_NAME, INDEX_NAME);
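
If you prefer to run this lookup from ABAP instead of ST04, a minimal ADBC sketch could look as follows. This is an illustrative sketch only: it assumes the SAP database user may read the Oracle dictionary view DBA_LOBS, and the SYS_LOB name is hard-coded as an example.

REPORT zfind_lob_table.

* result structure for the DBA_LOBS lookup (lengths kept short for brevity)
TYPES: BEGIN OF ty_lob,
         owner       TYPE c LENGTH 30,
         table_name  TYPE c LENGTH 30,
         column_name TYPE c LENGTH 30,
       END OF ty_lob.
DATA lt_lobs TYPE STANDARD TABLE OF ty_lob WITH EMPTY KEY.

TRY.
*   run the native Oracle query via ADBC on the default connection
    DATA(lo_result) = NEW cl_sql_statement( )->execute_query(
      `SELECT owner, table_name, column_name FROM dba_lobs ` &&
      `WHERE 'SYS_LOB0007061918C00006$$' IN (segment_name, index_name)` ).
    lo_result->set_param_table( REF #( lt_lobs ) ).
    lo_result->next_package( ).
    lo_result->close( ).
  CATCH cx_sql_exception INTO DATA(lx_sql).
    WRITE: / lx_sql->get_text( ).
ENDTRY.

LOOP AT lt_lobs INTO DATA(ls_lob).
  WRITE: / ls_lob-owner, ls_lob-table_name, ls_lob-column_name.
ENDLOOP.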

Alternatively, in ST04 you can launch the DBA view for DBA_LOBS.

In this case table DMC_INDXCL is the source of the SYS_LOB. Knowing this will easily lead you to OSS note 2704146 – How to delete data in DMC_INDXCL? – SLT for clean up.

Frequent SYS_LOB tables


Data archiving: data retrieval

When you perform data archiving, from time to time you need to give support on data retrieval issues.

This blog will explain some of the general data retrieval concepts.

Questions that will be answered in this blog are:

  • How does single record retrieval work?
  • How can I use the archive explorer?
  • How can I get a list of data from the archive?

Single record retrieval

Single record retrieval is different per archiving object.

Some objects (like FI_DOCUMNT) are nicely integrated. In FB03 the system will first check the database, then look into the archive infostructure records to find out if the document is archived, and then show the document in the same layout.

Most objects have an archive read program, which you can find in SARA. Run the read program and fill out the record(s) you need. Then select the archive data files to read. If you didn't label your files correctly, you need to select them all, which makes data retrieval slow. The results are then shown.

The results might look fine, or very basic; this differs per archiving object.
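
Under the hood, read programs use the ADK (Archive Development Kit) function modules. A minimal read sketch, assuming archiving object CHANGEDOCU and its header table CDHDR purely as an example:

REPORT zread_archive_sketch.

DATA: lv_handle TYPE sy-tabix,
      lt_cdhdr  TYPE STANDARD TABLE OF cdhdr.

* open archive files of the object for reading (a file selection may appear)
CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
  EXPORTING
    object         = 'CHANGEDOCU'
  IMPORTING
    archive_handle = lv_handle
  EXCEPTIONS
    OTHERS         = 1.
CHECK sy-subrc = 0.

DO.
* position on the next archived data object
  CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
    EXPORTING
      archive_handle = lv_handle
    EXCEPTIONS
      end_of_file    = 1
      OTHERS         = 2.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
* read all CDHDR records of the current data object
  CALL FUNCTION 'ARCHIVE_GET_TABLE'
    EXPORTING
      archive_handle        = lv_handle
      record_structure      = 'CDHDR'
      all_records_of_object = 'X'
    TABLES
      table                 = lt_cdhdr
    EXCEPTIONS
      OTHERS                = 1.
  IF sy-subrc = 0.
*   process lt_cdhdr here, e.g. filter and display
  ENDIF.
ENDDO.

CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
  EXPORTING
    archive_handle = lv_handle.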

Use of archive explorer for table level

An alternative way is the use of the archive explorer. This will give details on table level.

Start transaction SARE. Fill out the required archiving object and archive infostructure; in this case we used change documents. In the second screen, fill in the object key. You now see the list of changes. Double-click on a record to see the tables; double-clicking on a table shows the actual table line content.

Filling infostructures

More on infostructures can be read in this dedicated blog.

List transactions

Some transactions (especially in the FICO domain) have reporting integrated with the data archive. We will use transaction FBL3N as an example.

Start FBL3N, click on Data Sources, include Archive, and select the needed files.

If you didn't label your files correctly, you need to select them all, which makes data retrieval slow.

Data archiving improvement notes 2018

In 2018 SAP ran an improvement project which resulted in a set of OSS notes that make data archiving more robust and easier.

All of these notes come with manual work, so select the ones that are really useful for you.

Archiving write process improvements

Write variant maintenance has been made easier by allowing copying of variants (useful if you have many plants and company codes and want to store each one in a different archive file): 2520093 – Archive administration: Enhanced variant maintenance (writing, preprocessing, and postprocessing).

To get more detail in the file name of the written archive file, implement this OSS note: 2637105 – Print list for archiving write jobs: Placeholders for session numbers, archive file key in title.

Archiving storage process improvements

The archiving system technical check button is available in OAC0, but not in SARA. After applying this note, you can also run the check from the technical settings in SARA: 2599263 – Connection test for storage systems for archiving object.

Deletion process improvements

To be able to quickly continue interrupted archiving sessions, apply note 2520094 – Continue: Information on existence of interrupted or incomplete archiving sessions.

This note will implement checks to warn you about uncompleted previous store and delete runs: 2586921 – Run selection for deletion: Information about the existence of unstored archive files.

Some archiving objects use the AIS (Archive Information System) to enable quick retrieval of archived information by end users. This note gives a warning before the start of deletion if the AIS is not active for the object: 2624077 – Starting delete jobs: Check for active info structures.

Archiving overview and logging improvement

To get a better overall overview of all logs, apply OSS note 2433546 – Archive administration logs: Information about errors in hierarchy display. Showing only success messages is possible after applying OSS note 2855641 – Logs: New option “Success Messages Only” for detail log.

Direct navigation to the Archive File Browser: apply OSS note 2544517 – Archive administration: Direct navigation to ArchiveFileBrowser. This note only gives you a link; you can already start the Archive File Browser using transaction AS_AFB.

Archive file browser

Note 2823924 – Archive File Browser: Messages that do not belong to the Archive File Browser are output – solves a bug in the Archive File Browser.

SAP database growth control: HANA data aging

HANA data aging is a method to reduce the memory footprint of the HANA in-memory part without disturbing the end users. It does not reduce your database size.

Data aging as a technique is still available, but it has been superseded by the HANA NSE technique. See note 2799997 – FAQ: SAP HANA Native Storage Extension (NSE).

This blog will answer following questions:

  • What is HANA data aging?
  • How to switch HANA data aging on?
  • How to set up HANA data aging for technical objects?
  • What about data aging for functional objects?

What is HANA data aging?

HANA data aging is an application method to reduce the memory footprint based on application data logic. It is not a database feature but an application feature. The goal is not to reduce the database size (which it does not do), but to reduce the actual memory footprint of the HANA in-memory database.

Let’s take idocs as an example: idocs that were processed ok must be kept in the database for an agreed amount of time before business or audit allows you to delete them. Let’s say you can only delete after 1 year. Every action on idocs then means that a full year of idoc content occupies main memory. For daily operational tasks you normally only need 2 months of data in memory; for the rest you can accept that reading from disc into memory takes a bit longer.

This is exactly what data aging does: you partition the data into chunks based on application logic. In this case you can partition the idoc data per month and keep only the last 2 months in active memory. The other 10 months are on disc only. Reading data of the last 2 months is as fast as usual. When reporting on the 10 months on disc, the system first needs to load them from disc into memory, which is slower.

To reduce the database size itself, you would still need to do data archiving.

The advantage of data aging is that the more expensive memory footprint can be reduced in such a way that end users are not hampered; data aging is transparent for them. With data archiving, users always need to select a different transaction and archive data files.

How to switch on data aging?

To switch on data aging on system level you need to do 2 things:

  1. Set the parameter abap/data_aging to on in RZ11
  2. In SFW5 switch on the switch called DAAG_DATA_AGING

This only enables the system for data aging.

Switching on data aging for a technical object: example application logging

With transaction DAGADM you can see the administration status of the data aging objects. Initially you see red lights, because the objects are not yet activated for data aging.

Per object you have extra transactions (which unfortunately differ per object…) to set the retention times. For application logging this is transaction SLGR. In this example we choose to age all application logs after 180 days.

The advantage of this tailoring is that you can age only some of the objects if you want.

The transaction and OSS note for each of the objects can be found on this SAP blog.

The next step is to set up partitions for the object. To do this, start transaction DAGPTM and open the object you want to partition:

SBAL partitioning

The initial screen is in display mode. Hit the Change button. On the bottom right side, hit the Period button (Selection Time Period). In the popup, enter the desired start date, time bucket (month, year) and the number of repetitions:

Partition intervals

Now the partitions are defined. Hit the Execute button to start the partitioning in the background and wait until the job finishes. Before running this on a productive system, first check the runtime on a non-productive system with roughly the same data size, if possible.

After partitioning the screen should look like this:

Now we can activate the object in transaction DAGADM. Select the object and press the activate button. A popup appears to assign the object to an existing or a new data aging group:

The data aging run will be done per group.

To start the actual data aging run start transaction DAGRUN.

Here you can schedule a new run with the Schedule new run button.

To see the achieved results of the data aging go to transaction DAGADM and select the object. Then push the button View current/Historical data.

Functional data aging objects

Functional data aging objects exist as well, for financial documents, sales orders, deliveries, etc. The full list and minimal application version can be found on this SAP blog.

Words of caution for functional data aging:

  • The technical data aging objects are more mature in coding and usage. They are used in productive systems and have fewer bugs than the functional objects
  • Before switching on a functional data aging object, you need to prepare your custom ABAP code. If it is not adjusted properly to take the partitions into account (via date or other application selection mechanisms), all benefits are immediately lost. A Z program that constantly reads the full history will force a continuous read of the historical partitions, as the sketch below shows.
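
A minimal sketch of a data-aging-friendly read, with idocs as the aged object; the 60-day horizon and the restriction on the creation date EDIDC-CREDAT are illustrative assumptions only:

REPORT zread_recent_idocs.

* restrict the read to the recent horizon, so that the historical
* (aged) partitions are not touched
DATA lv_from TYPE d.
lv_from = sy-datum - 60.

SELECT docnum, status, credat
  FROM edidc
  WHERE credat >= @lv_from
  ORDER BY credat DESCENDING
  INTO TABLE @DATA(lt_idocs)
  UP TO 100 ROWS.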

Background information

More background information can be found in OSS notes 2416490 – FAQ: SAP HANA Data Aging in SAP S/4HANA and 2869647 – Guidance for use of Data Aging in SAP S/4HANA.


SAP database growth control: getting insight

This blog will explain about getting insight into SAP database growth and controlling the growth.

Questions that will be answered are:

  • Do I have a database growth issue?
  • What are my largest tables?
  • How do I categorize my tables?

Why control database growth?

Controlling database growth has several reasons:

  • When converting to S/4HANA you could end up with a smaller physical HANA server and need to buy fewer memory licenses from SAP
  • Less data storage leads to lower costs (think also about production data copied back to acceptance, development and sandbox systems)
  • Backup/restore procedures take longer with large databases
  • Performance is better with smaller databases

Database growth

The easiest way to check whether the database is growing too fast is the Database Growth section in the SAP EWA (EarlyWatch Alert). The EWA has both a graphical and a table representation of the growth:

EWA database growth picture

EWA database growth table

Or you can check online in the EWA workspace.

You now have to determine whether the growth is acceptable. This depends on the usage of the system, the number of users, the business data, and whether you have already stretched your infrastructure.

General rules of thumb: 

1. Growth < 1 GB/month: do not spend time.
2. Growth > 1 GB/month and < 5 GB/month: implement technical clean up.
3. Growth > 5 GB/month: implement technical clean up and check for functional archiving opportunities.

Which are my largest tables?

To find the largest tables and indexes in your system, start transaction DB02. In here, select the option Space/Segments/Detailed Analysis and select all tables larger than 1 GB (or 1000 MB):

DB02 selection of tables larger than 1 GB

Then wait for the results and sort them by size:

DB02 sorted by size

You can also download the full list.

Data volume management on me.sap.com

The me.sap.com site has a data volume management page for your systems.

Go to https://me.sap.com/dataoverview.

For the background on this page, you can read this dedicated blog.

Analysis of the large tables

Processing of the tables usually starts with the largest tables first.

You can divide the tables in following categories:

  1. Technical data: deletion and clean up can be done (logging you don’t want any more, like some idoc types, application logging older than 2 years, etc.): see the blog on technical clean up
  2. Technical data: archiving or storing can be done (idocs you must keep, but don’t need fast access to, attachments)
    1. In Oracle based systems, you might find large SYS_LOB tables. To analyze these, read this special blog.
  3. Functional data: archiving might be done here: for functional data archiving see this blog on the technical execution, and this blog for the business discussions.

SAP data management guide

SAP has a best practice document called “Data Management Guide for SAP Business Suite”, also known as the “DVM guide”. This document is updated every quarter to half year. The publication location is a bit hidden by SAP under their DVM (data volume management) service: go to the SAP support section and open the How-to-guides section. Or search on Google for the term “Data Management Guide for SAP Business Suite” (you might end up with a slightly older version). The guide gives you options per large table to delete and/or archive data.

Common technical objects

Most common technical tables you will come across:

  • EDIDC, EDIDS, EDI40: idocs
  • DBTABLOG: table changes
  • BALHDR, BALDAT: application logging
  • SWW* (all that start with SWW): workflow tables
  • SYS_LOB…..$$: attachments (office attachments and/or DB storage of attachments and/or GOS, global object services attachments)

Detailed table analysis for functional tables: TAANA tool

For detailed analysis on functional tables the TAANA (table analysis) tool can be used. Simply start transaction TAANA.

Now create a table analysis by entering the table name and selecting the analysis variant:

TAANA start screen

The default variant will only do a record count. Some tables (like BKPF in this example) come with a predefined ARCHIVE variant. This is the most useful option. If it does not fit your needs, you can also push the Create Ad Hoc Report button and define your own variant.

Caution: with the ad hoc variant, select your fields with care, since the analysis will count all combinations of the fields you select. Never select table key fields.

Results of TAANA are visible after the TAANA batch job is finished.

TAANA result

By running the proper TAANA analysis for a large functional table, you get insight into the distribution per year, company code, plant, document type, etc. This will also help you estimate the benefits of archiving a specific object.
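
Conceptually, such a TAANA analysis is similar to a GROUP BY count. A minimal illustrative sketch on BKPF (the field choice GJAHR/BUKRS/BLART is just an example, not the delivered ARCHIVE variant):

* count document distribution per fiscal year, company code and type
SELECT gjahr, bukrs, blart, COUNT( * ) AS cnt
  FROM bkpf
  GROUP BY gjahr, bukrs, blart
  ORDER BY cnt DESCENDING
  INTO TABLE @DATA(lt_distribution).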

For TAANA improvement on dynamic subfields, please check this blog.


Alternatives for TAANA

If you are running on HANA, you can also check using SE16H and SE16S.

From analysis to action

For the technical clean up read the special blog on this topic. For functional objects, you need to find the relation from the table to the functional data archiving object. This relation and how to find it is clearly explained in OSS note 2607963 – How to find the relationship between table and archive object.

There can also be 3rd party add-on data objects with corresponding large tables. SAP will not support clean up and/or data archiving for these objects and tables. See OSS notes 3480800 – How-To: Data Management of the /IDT/D_LOG table and 3486470 – Identifying how to archive a table that is not handled by SAP.

HANA NSE

One of the easier technical ways to lower the amount of data in memory on a live HANA system is to set up HANA NSE (native storage extension). This will not reduce the data volume, but it reduces the amount of data in memory by keeping old data on disc. Read more in this blog.

SAP data volume management via SAP solution manager

SAP offers an option to report on data volume management via SAP Solution Manager directly, or as a subsection in the EWA. Experience so far with this: too long in setup, too buggy. The methods described above are much, much faster and you get insight in a matter of hours. The DVM setup will take you hours to do and days/weeks to wait for results…. TAANA and SE16H are way faster.

Or use the new DVM on me.sap.com: see this blog.