If you are running the SCM – APO solution, you need to consider what to do with the functions running there: extended support for SCM – APO ends on 31.12.2030.
To check the current usage and potential successor solutions, you can run the SAP Readiness Check for SAP Supply Chain solutions.
This blog is a continuation of the generic setup explained in the blog on this page.
Many SAP components running on Linux (Host Agent, Web Dispatcher, Message Server, and HANA) store their certificates in local PSE files instead of STRUST. To automate certificate enrollment and renewal, these components use two tools:
sapgenpse → creates PSE files and generates CSRs
sapslcscli → enrolls and renews certificates through SAP Secure Login Server (SLS)
This page describes the generic and simplified approach for automating certificate enrollment and renewal for:
SAP Host Agent
SAP Web Dispatcher
SAP Message Server / ICM (Linux)
SAP HANA pse
All of them use the same command‑line workflow.
Process:
1. Create the Registration PSE (ra.pse)
This PSE is used by the system to authenticate with Secure Login Server (SLS).
-g 365 = renew 365 days before expiry (you can adjust this). Each SLS certificate profile has its own metadata URL. Renewal replaces the certificate in the PSE automatically.
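As an illustration, creating the registration PSE and its certificate signing request with sapgenpse could look like this (run as the <sid>adm user; the PIN, file names and distinguished name are placeholders, check the sapgenpse help for your release):

```shell
# Create the registration PSE (ra.pse) and write a PKCS#10 CSR to ra_csr.p10.
# <PIN>, <hostname>, <organization> and <country> are placeholders.
cd /usr/sap/<SID>/<INSTANCE>/sec
sapgenpse gen_pse -p ra.pse -x <PIN> -r ra_csr.p10 "CN=<hostname>, O=<organization>, C=<country>"
```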
7. Automate with a Cron Job (Linux Scheduler)
To enable automation, configure a cron job that periodically executes the renewal command with the corresponding SLS certificate profile.
We use a background job in SAP ABAP systems and the Job Scheduler in SAP Java systems. For systems that are neither ABAP nor Java, we can use an OS tool such as cron to execute this script on a daily basis.
Explanation of each placeholder:
<PSE_PATH> – directory where the PSE files are stored (e.g., /usr/sap/<SID>/<INSTANCE>/sec)
<sapslcscli_path> – directory containing the sapslcscli executable (e.g., /usr/sap/<SID>/<INSTANCE>/exe)
<SLS_Renewal_Profile_URL> – metadata URL of the specific SLS certificate profile used for renewal
-g 30 – renews the certificate 30 days before expiry (you can adjust this)
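Putting the placeholders above together, a daily cron setup could look like the sketch below. The wrapper script and log path are illustrative, and the actual sapslcscli arguments must be taken from your tested manual renewal command (the exact syntax is not reproduced here):

```shell
#!/bin/sh
# renew_certs.sh - daily certificate renewal wrapper (illustrative sketch).
# Replace the placeholders and insert your tested sapslcscli renewal command.
PSE_DIR="<PSE_PATH>"                      # e.g. /usr/sap/<SID>/<INSTANCE>/sec
CLI_DIR="<sapslcscli_path>"               # e.g. /usr/sap/<SID>/<INSTANCE>/exe
PROFILE_URL="<SLS_Renewal_Profile_URL>"   # metadata URL of the renewal profile

# Your tested renewal command goes here, using -g 30 and "${PROFILE_URL}":
"${CLI_DIR}/sapslcscli" ...

# Crontab entry for the <sid>adm user (crontab -e), running daily at 02:00:
# 0 2 * * * /usr/sap/scripts/renew_certs.sh >> /var/log/sap_cert_renew.log 2>&1
```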
Now all certificates can be renewed automatically using the SLS command-line tool, which saves manual effort by using the SLS certificate lifecycle management functionality.
This blog is a continuation of the generic setup explained in the blog on this page.
To enable certificate automation in an AS Java system, you must first deploy the Secure Login Library 3.0. (official help.sap.com link). The operating‑system‑independent package is delivered as an SCA file, which can be installed on AS Java through the telnet deployment tool.
Once deployed, the system automatically provides access to the CLM application, available at: https://<host>:<port>/sapsso/clm
In the NWA, go to Configuration → Certificates and Keys. Under Key Storage, open Security → Permissions by Domain, search for the CLM application, and grant it full access to the keystore views that should be renewed automatically.
Open the CLM application in the browser. Start by registering the system (initial enrollment).
Enter the metadata URL and click Fetch.
When prompted, log in with a user allowed to perform CLM enrollment.
Click Register, then Save
To renew certificates, go to the Enrollment tile.
The metadata URL is already stored, so select the keystore view and the certificate you want to renew.
After choosing Enroll Certificates, the updated certificates appear.
You can review them using Show Details.
Finally, you may create a scheduled task so renewals run automatically.
The task executes under the currently logged‑in user.
Conclusion
After this configuration, the certificates in AS Java should be renewed regularly before reaching the end of their lifetime.
To organize certificate profiles and control which SAP systems participate in the automation process, you need to create an Application Server Profile Group in the Secure Login Administration Console (SLAC).
This blog is a continuation of the generic setup explained in the blog on this page.
In SLAC, navigate to: Application Server Profile Groups → Create New Group
Give the group a name that clearly identifies its purpose.
2. Assign System Identifiers
Under System Identifiers, add the SIDs of all SAP systems that will participate in certificate renewal. Important notes: SIDs can contain uppercase letters and digits.
3. Configuration on AS ABAP
Configuring ABAP for CLM automation involves running two key reports. Most recent NetWeaver releases already include them, but to ensure you are using the latest versions, SAP recommends implementing the corrections from SAP Note 2452425 – Collective Note – SAP SSO Certificate Lifecycle Management for ABAP.
Run the SSF_CERT_ENROLL Report
This report performs the initial connection between the ABAP system and Secure Login Server (SLS).
Metadata URL of the Application Server Profile Group in SLS
Technical user with password authentication that is authorized to perform the enrollment
Running this report enrolls the system for the first time and retrieves the required certificate information.
Once the enrollment report has been executed successfully, the system displays an overview of all certificate objects that are available for renewal. This screen also shows the certificate profiles assigned to the different PSEs.
You will see a list of certificate entries along with the available certificate profiles. For each entry:
Choose the appropriate certificate profile
Select the certificates you want to renew
This allows you to control exactly which PSEs should be processed.
2. Execute the Renewal
After starting the renewal process, the system will update the selected certificates through the Secure Login Server. When the renewal completes successfully, you should receive a confirmation message for each certificate that was processed.
3. Schedule Automated Renewal
To avoid manual renewals in the future, you should save the selection in a variant and set up a scheduled job of program SSF_CERT_RENEW. You need a batch job per application server and per certificate type.
Conclusion
After this configuration, the certificates in AS ABAP should be renewed regularly before reaching the end of their lifetime.
Digital certificates play a critical role in securing SAP systems, whether for HTTPS communication, SSO, Cloud Connector integration, or internal system-to-system traffic. As organizations scale, manual certificate renewal becomes error-prone and difficult to manage.
This guide explains how to automate the entire lifecycle of SAP certificate monitoring, renewal, deployment, and validation using standard tools available on any SAP system. The approach is applicable to SAP ABAP, Java, HANA, Web Dispatcher, Content Server and Host Agent environments.
This blog will explain the overview and generic setup. The follow-up blogs will describe the setup per component.
SAP Secure Login Server (SLS) is part of the SAP Single Sign-On product. This is a licensed product; for more information, read OSS note 1876552 – License Requirement and Download of Single Sign-On 3.0 from the SAP Software Center. Within SSO, SLS serves as the central component for Certificate Lifecycle Management (CLM), enabling automated renewal of certificates used by:
AS ABAP systems
AS Java systems
SAP Web Dispatcher
SAP Host Agent
Other SAP components requiring HTTPS or SSL
For automation, SAP SLS communicates with a PKI infrastructure. A widely used option is Microsoft NDES, which supports enrollment using the SCEP (Simple Certificate Enrollment Protocol).
This blog describes the configuration in the Secure Login Server and how to connect an AS ABAP as well as an AS Java, including the configuration of a remote CA.
This blog assumes that you are familiar with the general Certificate Lifecycle Management process. Before setting up Certificate Lifecycle Management (CLM) with Secure Login Server (SLS), make sure the following requirements are met:
1.1 Secure Login Server Installed
You need a working installation of SAP Secure Login Server. SLS can run on any supported, modern SAP NetWeaver AS Java system. Read OSS note 3529951 – SAP Single Sign On 3.0 product compatibility for the exact specifications.
1.2 Administrative Access
You need a user who can access the Secure Login Administration Console (SLAC) to configure CLM settings.
1.3 Supported SAP Releases
Your SAP systems must support the CLM client components:
A user with SLAC_CERT_ADMIN/SLCLM_ADMIN/SLAC_SUPERADMIN or equivalent permissions in the Secure Login Administration Console.
2. Configuration of Destination (Connecting SLS to the PKI / NDES Server)
(This step must be completed before creating certificate profiles.)
Before defining any certificate profiles in Secure Login Server (SLS), you must configure a destination that allows SLS to communicate with your PKI infrastructure (typically Microsoft NDES). SLS uses this destination to forward CSR requests and to retrieve issued certificates, acting as a broker between SAP systems and the PKI server. This step is mandatory: without a properly configured destination, SLS cannot reach the PKI, and no certificate enrollment or renewal will work.
Open the Destinations Area in NWA
In the SLS (AS Java) system, go to NetWeaver Administrator (NWA) and navigate to:
Configuration → Infrastructure → Destinations
This area allows you to create HTTP/S destinations that SAP components use to reach external systems.
Create a New Destination
Create a new destination dedicated to your PKI/NDES server.
Recommended naming convention: use a name that reflects your PKI or environment, for example NDES_PROD, NDES_QA or PKI_SCEP.
This improves clarity when assigning the destination later in SLS certificate profiles.
Configure Destination Type
Set the destination type to HTTP Destination. Since NDES SCEP endpoints are exposed over HTTPS, this ensures compatibility.
Enter the SCEP URL of the NDES Server
In the Connection Information section, enter the full SCEP endpoint URL of your PKI/NDES server. For example (generic format): https://<ndes-hostname>/certsrv/mscep/
This URL points to the NDES SCEP service used for certificate enrollment.
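As an optional OS-level check before creating the destination, the SCEP endpoint can be probed directly. The hostname below is a made-up example; GetCACaps is a standard SCEP operation (RFC 8894) that returns the capabilities of the CA:

```shell
# Build the SCEP capability-query URL for a hypothetical NDES host and show it.
ndes_host="ndes.example.com"
scep_url="https://${ndes_host}/certsrv/mscep/?operation=GetCACaps"
echo "${scep_url}"
# curl -k "${scep_url}"    # uncomment to test the connection from the SLS host
```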
(Optional) Configure Logon Data
Depending on your PKI security configuration, you may need to configure:
Basic authentication with a technical service account
Certificates for mutual TLS
No authentication, if internal trust is used
This must align with your PKI design.
Test the Destination
After configuring, click “Ping Destination” and ensure the HTTP/HTTPS connection is successful. If the destination is reachable, SLS can now communicate with NDES.
Remark: even though HTTPS is configured, the success message still shows HTTP.
SLS relies on this destination for all CLM certificate operations.
Why this destination is critical
This destination is used internally by SLS for:
Forwarding Certificate Signing Requests (CSRs) to NDES
Receiving signed certificates
Validating enrollment responses
Triggering certificate renewals
Communicating over the SCEP protocol
Without this destination, all profiles (TLS, SNC, Client) and enrollment workflows will fail because SLS would have no path to the PKI infrastructure.
How It Fits Into the Overall CLM Workflow Once the destination is created: SAP System → SLS → (Destination) → NDES/PKI → CA → SLS → SAP System
3.1 Configuring Secure Login Server for CLM
Before you configure anything in the Secure Login Administration Console (SLAC), you must prepare SLS to accept certificate‑based authentication from SAP systems. This requires creating a dedicated logon stack.
The formal SAP help document link can be found here. The steps below summarize the main steps, with clarifying screenshots.
1. Create a Logon Stack in SLS
1.1 Create a Logon Stack in NWA
Begin by opening the NetWeaver Administrator (NWA) of the Secure Login Server and navigating to:
Configuration → Authentication and Single Sign‑On → Logon Stacks
1.2 Steps to Create the Logon Stack
Create a new logon stack with a name that clearly indicates it is intended for CLM client authentication (for example, CLM_ClientAuth).
Add a single login module to this stack: SecureLoginModuleUserDelegationWithSSL
Configure the Three Key Attributes
This module requires three important attributes that control how certificates are validated:
🔹 Rule1.subjectName
Used to filter acceptable certificate subjects using a regular expression. If you set it to the wildcard pattern: (.*)
it effectively accepts any subject. Even with the wildcard, the Common Name (CN) of the system certificate must still match one of the system entries you later configure in the Secure Login Administration Console (SLAC).
🔹 Rule1.issuerName
Controls which certificate issuers are allowed. Typically also set to: (.*)
This disables strict issuer filtering and allows any issuer that SLS already trusts through its root CA store.
🔹 UserMappingMode
Must be set to: VirtualUser
SAP systems do not exist as users in the UME. Therefore, SLS maps each system certificate to a virtual identity, and these identities are filtered and managed later in SLAC.
1.3 Purpose of the Logon Stack
This logon stack is used by every application profile in CLM. It ensures that SAP systems can authenticate securely when requesting new or renewed certificates. It forms the authentication foundation for all certificate lifecycle operations in SLS; every CLM application profile will reference it. By configuring:
The SecureLoginModuleUserDelegationWithSSL module
The subjectName and issuerName rules
Virtual user mapping
you ensure that:
✔ SAP systems authenticate themselves correctly via SSL client certificates
✔ Certificate enrollment and renewal requests are validated through the defined rules
✔ System-to-certificate mapping is handled through virtual identities (since systems do not exist in the UME)
✔ CLM workflows function seamlessly from initial enrollment through automatic renewal
✔ Every SAP component (ABAP, Java, Web Dispatcher, Host Agent, HANA) can make use of this same logon stack
This logon stack is therefore reused across all application profiles in CLM, ensuring consistency and secure handling of certificate requests throughout the entire SAP landscape.
1.4 Configuration of Certificate Profiles
After preparing the logon stack, the next step in setting up Certificate Lifecycle Management (CLM) is to define the certificate profiles inside the Secure Login Server (SLS). These profiles describe what type of certificates SAP systems should request and how SLS should communicate with the backend CA/NDES to obtain them. Each certificate profile represents a specific certificate purpose (such as SNC or TLS) and includes the rules that control certificate enrollment, renewal, and template mapping.
Registration Agent Profile (Used for Enrollment Requests)
The first profile that must be created is the Registration Agent (RA) profile. This profile is responsible for handling initial certificate enrollment requests coming from SAP systems. It acts as the “front door” for:
Receiving enrollment metadata from SAP components
Forwarding Certificate Signing Requests (CSRs) to NDES
Returning the signed certificate back to the SAP system
Every system type (ABAP, Java, HANA, Web Dispatcher, Host Agent) uses this RA profile during enrollment.
Fill in the required entries according to your organization's details. Under Certificate Management, create the PKI structure and the CA for issuing certificates.
Create Certificate Profiles for Each Required Certificate Type
✔ Initial_TLS_Cert_SAN
Used for first-time SAN-based enrollment and renewal. This profile is required for the very first certificate issuance, when you are enrolling a system for the first time and want the initial certificate to follow SAN requirements (as enforced by modern PKI policies).
✔ SNC Certificate Profile
Used for systems that use Secure Network Communication (SNC) with X.509 certificates.
✔ TLS Server Certificate Profile
Disclaimer: This page covers only the architecture and configuration required for SAP certificate automation. For the full enrollment and renewal procedure, please refer to 👉 “SAP Certificate Enrollment & Renewal Automation Process” page dedicated to the operational workflow.
SAP's SCMON tool for measuring ABAP system usage is great. The problem is that the data is a lot to digest. The SUSG function can aggregate the data, but the result is difficult to view.
The Z program below simply takes the SCMON data and provides a summarized view.
Transaction ZSCMON and program ZSCMON
Create a program ZSCMON and assign transaction code ZSCMON as well.
Report code:
REPORT zscmon.
CONSTANTS: zgc_tcode TYPE tcode VALUE 'ZSCMON',
           zgc_col   TYPE lvc_fname VALUE 'NUMBEROFCALLS'.

DATA: zgv_objtyp TYPE trobjtype,
      zgv_objnme TYPE sobj_name,
      zgv_clsnme TYPE scmon_classname.

PARAMETERS: zp_stdat TYPE scmon_slicestartdate,
            zp_eddat TYPE scmon_sliceenddate.

SELECT-OPTIONS: zs_obtyp FOR zgv_objtyp,
                zs_obnme FOR zgv_objnme,
                zs_clsnm FOR zgv_clsnme.

PARAMETERS: zp_sum RADIOBUTTON GROUP g1 DEFAULT 'X',
            zp_det RADIOBUTTON GROUP g1.

INITIALIZATION.
* Authority check for the transaction code
  CALL FUNCTION 'AUTHORITY_CHECK_TCODE'
    EXPORTING
      tcode  = zgc_tcode
    EXCEPTIONS
      ok     = 0
      not_ok = 2
      OTHERS = 3.
  IF sy-subrc <> 0.
    MESSAGE e172(00) WITH zgc_tcode.
  ENDIF.

AT SELECTION-SCREEN.
  IF zp_eddat LT zp_stdat.
    MESSAGE TEXT-001 TYPE 'E'. "Slice End Date cannot be less than Slice Start Date
  ENDIF.

START-OF-SELECTION.

  IF zp_det EQ abap_true.
    SELECT object, obj_name, proctype, procname, classname,
           SUM( counter ) AS numberofcalls
      FROM scmon_vdata
      INTO TABLE @DATA(zgt_scmond)
      WHERE slicestart GE @zp_stdat
        AND sliceend   LE @zp_eddat
        AND object     IN @zs_obtyp[]
        AND obj_name   IN @zs_obnme[]
        AND classname  IN @zs_clsnm[]
      GROUP BY object, obj_name, proctype, procname, classname.

  ELSEIF zp_sum EQ abap_true.
    SELECT object, obj_name,
           SUM( counter ) AS numberofcalls
      FROM scmon_vdata
      INTO TABLE @DATA(zgt_scmons)
      WHERE slicestart GE @zp_stdat
        AND sliceend   LE @zp_eddat
        AND object     IN @zs_obtyp[]
        AND obj_name   IN @zs_obnme[]
      GROUP BY object, obj_name.
In S/4HANA 2023 and more recent versions there is a new SE16 flavour: SE16J (for setting up queries on tables with joins). This is a very powerful function next to SE16H.
Idoc change pointers can be used to set up master data distribution. The most used objects are materials, customers, vendors and GL accounts. This setup is also known as the ALE (Application Link Enabling) setup.
General activation of change pointers
Start transaction BD61 to activate change pointers in general (this is a one-off general activation):
Per message type the change pointer activation is done in transaction BD51:
In transaction BD52 you can see which fields trigger a change pointer for that specific message type:
If you want to know (or steer) the processing function module behind it, start transaction BD60 and check the details:
Distribution model and Idoc partner profile setup
In transaction BD54 you define logical systems. In our example we will use the definition SOURCE and TARGET:
The SOURCE system definition is normally connected to the main client in the SCC4 transaction:
Now we can model the data flow in BD64 distribution model.
Create the Model View first:
Then add the message type with sender, receiver and message type:
So the end result looks like this:
In WE20 in the source system, now we set up the partner profile:
The receiver port must be defined in WE21 (ports in idoc processing):
The RFC destination is maintained in SM59 as usual and contains the technical data of the target system.
In the target system the setup of the ALE model needs to be done as well, and the partner profile needs to be on the inbound side:
Testing the setup
To test the setup create a material or change one. This should trigger a change pointer.
Run program RBDMIDOC or start transaction BD21 to evaluate the change pointers:
If you run it for the first time, it is best to clear all the old items with program RBDCPCLR2. If the activation was done years ago, you would otherwise end up with a lot of unwanted Idocs.
When running the program, an Idoc is created for each material master change (not only yours). You can check in WE02, WE05 or WLF_IDOC whether the Idoc was created correctly.
The partitioning feature of the SAP HANA database splits column-store tables horizontally into disjunctive sub-tables or partitions. In this way, large tables can be broken down into smaller, more manageable parts.
Partitioning is only available for tables located in the column store. The row store does not support partitioning.
BW systems are handled separately, please refer to chapter “BW Systems”.
1.1 Reasons and background of partitioning
The following are some reasons, in addition to the advantages described later, to perform partitioning:
In SAP HANA, a non-partitioned column-store table can't store more than 2 billion rows.
Memory requirements are doubled at the time of the table optimization.
There is an increased risk of locking issues during table optimization.
The CPU consumption can be significant, particularly during optimize compression runs.
The I/O write load for savepoints is significant and can lead to trouble like a long critical phase (SAP Note 2100009 – FAQ: SAP HANA Savepoints).
SAP HANA NSE: range partitions with old data can be offloaded more easily.
Therefore you should avoid particularly large tables and partitions and consider more granular partitioning instead. A reasonable size threshold is typically 50 GB; more granular partitioning can be useful when this limit is exceeded.
1.2 Best Practices
The following best practices should be kept in mind:
Keep the number of partitioned tables low
Keep the number of partitions per table low (maximum 8 partitions)
Maximum 100 – 200 million rows per partition (recommended).
Define partitioning on as few columns as possible
For SAP Suite on HANA, keep all partitions on same host
Repartitioning rules: When repartitioning, choose the new number of partitions as a multiple or divider of current number of partitions.
Avoid unique constraints
Repartitioning throughput: roughly 10-100 GB/hour
Prefer HASH partitioning on a selective column that is part of the primary key; check which column is used most for sorting and filtering.
1.3 Advantages
These are some advantages of partitioning:
Load balancing: in a distributed system, individual partitions can be distributed across multiple hosts.
Record count: Storing more than 2 billion rows in a table.
Parallelization: Operations can be parallelized by using several execution threads.
Partition Pruning: Queries are analyzed to see if they match the given partitioning specification of a table (STATIC) or the content of specific columns in aging tables (DYNAMIC). Remark: When a table is range partitioned based on MONTH and in the WHERE clause YEAR is selected, all partitions are scanned and not only the 12 partitions belonging to the year.
Delta merge performance: Only changed partitions must be duplicated in the RAM, instead of the entire table.
1.4 Partitioning Types
The following partitioning types can be used, but normally only HASH and RANGE are used:
HASH: Distribute rows to partitions equally for load balancing and to overcome the 2 billion row limitation.
ROUND-ROBIN: Achieve an equal distribution of rows to partitions.
RANGE: Dedicated partitions for certain values or value ranges in a table.
Multi-level (HASH/RANGE): first partition on level 1, then on level 2.
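For illustration, the corresponding SQL statements look like this. MSEG/MBLNR follows the SAP recommendations listed later; ZSALES and the year ranges are made-up examples, so verify the syntax against the SAP HANA SQL reference for your revision:

```sql
-- HASH: 4 partitions over the document number (example: MSEG on MBLNR)
ALTER TABLE "MSEG" PARTITION BY HASH ("MBLNR") PARTITIONS 4;

-- RANGE: one partition per year plus a rest partition
-- ("ZSALES"/"GJAHR" and the year values are illustrative only)
ALTER TABLE "ZSALES" PARTITION BY RANGE ("GJAHR")
  (PARTITION '2022' <= VALUES < '2023',
   PARTITION '2023' <= VALUES < '2024',
   PARTITION OTHERS);
```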
1.5 Parameters
The following optional parameters can be set to optimize HANA partitioning, if required.
Inifile | Section | Parameter | Value | Remark
indexserver.ini | joins | single_thread_execution_for_partitioned_tables | false | Allow parallelization
indexserver.ini | partitioning | split_threads | <number> | Parallelization number for repartitioning; 80 % of max_concurrency
indexserver.ini | table_consistency_check | check_repartitioning_consistency | true | Implicit consistency check
SQL commands:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('joins','single_thread_execution_for_partitioned_tables') = 'false' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('partitioning','split_threads') = '<number>' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('table_consistency_check','check_repartitioning_consistency') = 'true' WITH RECONFIGURE;
1.6 Privileges
The following privileges should be granted to the user executing the partitioning:
System privilege: PARTITION_ADMIN
For the user examining the partitioning, the following privilege might also be of interest:
Whenever a table is queried and that table or partition is not present in memory, HANA automatically loads it into memory, either partially or fully.
If the table is partitioned, only the specific partition that contains the queried rows is loaded into memory.
Even if you only need 1 row from a partition of a table with 1 billion records, that entire partition will be loaded, either partially or fully.
In HANA we can never load 1 row alone into memory from a table.
2. Determine candidates
Check which tables are larger than 50 GB or have more than 1 billion records:
select a.table_name,
       (select string_agg(column_name,', ')
          from index_columns
         where constraint = 'PRIMARY KEY'
           and table_name = a.table_name
         group by table_name) "Primary Key Columns",
       case a.is_partitioned
         when 'TRUE' then (select LEVEL_1_TYPE || '(' || replace(LEVEL_1_EXPRESSION,'"','') || ')#' || LEVEL_1_COUNT
                             from partitioned_tables
                            where table_name = a.table_name)
         else 'No'
       end as "Current Partitioning",
       a.record_count "Rows",
       to_decimal(a.table_size/1024/1024/1024,2,2) "Size GB"
  from m_tables a
 where a.IS_COLUMN_TABLE = 'TRUE'
   and (a.record_count > 1000000000 or a.table_size/1024/1024/1024 > 50)
 order by a.table_name;
The output will display:
TABLE_NAME: Name of the table
Primary Key Columns: The columns on which the primary key is created
Current Partitioning: If the table is currently partitioned and the partition type, columns and number of partitions
Rows: Number of records in the table
Size GB: Size of the table in memory
3. Determine Partitioning Strategy
In order to choose an appropriate column for partitioning we need to analyze the table usage in depth.
3.1 Technical Tables
3.1.1 With recommendations
In case you can find exact partitioning recommendation on a specific table please follow the recommendation. Check SAP Note 2044468 – FAQ: SAP HANA Partitioning to find the latest information. Here only the most common tables are listed.
CDHDR, CDPOS – HASH; use the column with the best value distribution, and use the same column for both tables if possible; in some cases OBJECTID for CDHDR and CHANGENR for CDPOS can be the best solution
CKMLCR, CKMLKEPH – HASH on KALNR
COBK, COEP – HASH on BELNR
COFV – HASH on CRID
DBTABLOG – HASH on LOGID
EDID4, EDIDS – HASH on DOCNUM
EQKT – HASH on EQUNR
IDOCREL – HASH on ROLE_A or ROLE_B (the column with the better value distribution)
JCDS, JEST – HASH on OBJNR
JVTLFZUO – HASH on VBELN
KEPH – HASH on KALNR
KONV – HASH on KNUMV
MATDOC – HASH on MBLNR
MBEW, MBEWH, MVER, MYMFT – HASH on MATNR
MSEG – HASH on MBLNR
PCL2, PCL4 – HASH on SRTFD
RESB – HASH on RSNUM
RSEG – HASH on BELNR
SOC3 – HASH on SRTFD
SRRELROLES – HASH on OBJKEY
STXL – HASH on TDNAME
SWWCNTP0, SWWLOGHIST – HASH on WI_ID
VBFA – HASH on VBELV (Suite on HANA) or RUUID (S/4HANA)
3.1.2 Without recommendations
If there is no specific SAP recommendation for partitioning a table (e.g., because it is not listed in SAP Note 2044468 – FAQ: SAP HANA Partitioning, or because it is a customer-specific table), please follow the approach described in chapter “3.3 Other tables”.
RANGE on RBUKRS: only if there is a reasonable data distribution by company code and the expected data volume of the largest company code is not significantly above 1 billion records per year.
3.3.1 Ask the key users
Ask the key users which column(s) they query frequently and which column is always part of the WHERE clause.
If they are not very clear on this, we can help them with the plan cache data.
3.3.2 Check M_SQL_PLAN_CACHE
With the query below, we can get a list of the most expensive statements on the table, from which the WHERE clauses can be identified:
select top 10 upper(SUBSTR_AFTER(STATEMENT_STRING, 'WHERE')), EXECUTION_COUNT, TOTAL_EXECUTION_TIME
  from M_SQL_PLAN_CACHE
 where STATEMENT_STRING like '%TABLE%'
   and not STATEMENT_STRING like 'select upper(SUBSTR_AFTER(STATEMENT_STRING%'
   and not upper(SUBSTR_AFTER(STATEMENT_STRING, 'WHERE')) like ''
   and not upper(SUBSTR_AFTER(STATEMENT_STRING, 'WHERE')) like '%UNION%'
   and TOTAL_EXECUTION_TIME > 0
   and EXECUTION_COUNT > 5
 order by TOTAL_EXECUTION_TIME desc;
From the result you have to analyze the where clause and find a common pattern.
Let’s assume that table CDPOS is accessed mostly via MANDT and CHANGENR (This is by the way the case in many SAP customer systems), the solution would be to implement range-range multi-level partitioning for column MANDT (level 1) and column CHANGENR (level 2).
For sure some SQL statements will have to look into several partitions of one MANDT when CHANGENR is not used in the where-clause.
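For illustration, such a range-range multi-level partitioning could be created like this. The client value and the CHANGENR ranges are made-up examples; verify the exact multi-level syntax against the SAP HANA SQL reference for your revision:

```sql
-- RANGE-RANGE multi-level partitioning for CDPOS:
-- level 1 on MANDT, level 2 on CHANGENR (client and ranges are examples only)
ALTER TABLE "CDPOS"
  PARTITION BY RANGE ("MANDT") (PARTITION VALUE = '100', PARTITION OTHERS),
               RANGE ("CHANGENR") (PARTITION '0000000000' <= VALUES < '0500000000',
                                   PARTITION OTHERS);
```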
3.3.3 Join Statistics
Check the columns that are getting joined on this table. Use SQL script HANA_SQL_Statistics_JoinStatistics_1.00.120+ from OSS note 1969700 – SQL Statement Collection for SAP HANA and modify the SQL like below.
From the output, you can determine on which column joins are happening most and hence a HASH on this column will make the query runtime faster.
3.3.4 Enable SQL Trace
Enable SQL trace for specific table.
3.3.5 No specific range values
When there are no specific range values that are frequently queried, and most of the columns are used most of the time, a HASH algorithm is usually the best fit.
It is similar to round-robin partitioning, but the data is distributed according to the hash algorithm over one or two designated primary key columns:
HASH partitioning can only be done on primary key fields.
Do NOT choose more than 2 primary key fields for HASH.
Within the primary key, check which column has the most distinct values. That specific column can be chosen for repartitioning.
To determine which primary key column can be chosen for re-partitioning, perform below steps.
Load the table fully into memory: LOAD TABLE ALL;
Select the primary key columns and the distinct records per column:
select a.column_name, sum(b.distinct_count)
  from index_columns a, m_cs_columns b
 where a.table_name = 'TABLE'
   and a.constraint = 'PRIMARY KEY'
   and a.table_name = b.table_name
   and a.column_name = b.column_name
 group by a.column_name
 order by sum(b.distinct_count) desc;
3.3.6 NSE-based partitioning
Check if there is a column with a date-like format in the primary key. It might be a candidate for NSE based partitioning.
select distinct a.COLUMN_NAME, a.DATA_TYPE_NAME, a.LENGTH, a.DEFAULT_VALUE
  from TABLE_COLUMNS a, INDEX_COLUMNS b
 where a.TABLE_NAME = b.TABLE_NAME
   and a.COLUMN_NAME = b.COLUMN_NAME
   and replace(a.DEFAULT_VALUE,'0','') = ''
   and length(a.DEFAULT_VALUE) >= 4
   and not a.DEFAULT_VALUE = ''
   and a.LENGTH between 4 and 14
   and b.CONSTRAINT = 'PRIMARY KEY'
   and a.TABLE_NAME = 'TABLE';
This query only gives an indication. To be sure, query the table itself.
Run the following query and execute the output as the schema owner:
select 'select top 5 ' || string_agg(COLUMN_NAME,', ') || ' from ' || TABLE_NAME || ';'
  from INDEX_COLUMNS
 where CONSTRAINT = 'PRIMARY KEY'
   and TABLE_NAME = 'TABLE'
 group by TABLE_NAME;
Output looks like:
select top 5 MANDT, KALNR, BDATJ, POPER, UNTPER from CKMLPP;
Execute the output-query and the result looks like:
In this case the column LOGDATA contains a date format, but we need to convert it to the YEAR. This can be done by dividing LOGDATA by 10000 (going from 8 characters to 4 characters).
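Once such a date-based range partitioning exists, older partitions can be moved to NSE warm storage with ALTER TABLE. A sketch, where the table name and partition ID are examples only:

```sql
-- Move an old range partition to NSE page-loadable (warm) storage.
-- "DBTABLOG" and partition ID 1 are illustrative; pick your own table/partition.
ALTER TABLE "DBTABLOG" ALTER PARTITION 1 PAGE LOADABLE;

-- Bring the partition back to column-loadable (hot) storage if needed.
ALTER TABLE "DBTABLOG" ALTER PARTITION 1 COLUMN LOADABLE;
```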
4. Determine Partitions
4.1 Ranges
In case of RANGE partitioning, to get an indication of the number of partitions and the number of rows per partition, execute the queries in the chapters below, depending on the column value that can be used.
If required, some partition ranges can be combined afterwards.
The ranges (start and end), the number of records and the estimated size per partition will be displayed.
Note: these queries should be executed by the SAP Schema user!
4.1.1 Range on part of column
When only a part of a column can be used, the value has to be divided by a power of ten that removes the unwanted trailing characters.
This is the case when you want to use the YEAR from a string that contains YEAR, MONTH, DAY and TIME.
Note: Edit the query with the correct TABLE and the chosen COLUMN, and if required change the divider number. For example, if you want to remove 4 characters from the column, divide by 10000 (1 with 4 zeroes).
Example:
From 8 to 4 characters: divide by 10000 (4 zeroes)
From 8 to 6 characters: divide by 100 (2 zeroes)
Query:
select "Start", "End", "Rows", to_decimal("Est_Size_Gb"*"Rows"/1024/1024/1024,2,2) "Est_Size_GB" from (select to_decimal(COLUMN/RANGE,1,0)*RANGE "Start", to_decimal(COLUMN/RANGE,1,0)*RANGE+RANGE "End", count(*) "Rows", (select table_size/record_count from m_tables where table_name = 'TABLE') "Est_Size_Gb" from TABLE group by to_decimal(COLUMN/RANGE,1,0)*RANGE order by to_decimal(COLUMN/RANGE,1,0)*RANGE);
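To see what the expression to_decimal(COLUMN/RANGE,1,0)*RANGE computes, here is an illustrative Python sketch (not SAP code; for YYYYMMDD-style values the fractional part of the division never reaches .5, so rounding and flooring give the same bucket):

```python
# Illustrative sketch of the bucketing used in the query above:
# each value is mapped to the start and end of its range bucket.

def bucket(value: int, step: int) -> tuple:
    start = (value // step) * step  # floor to a multiple of step
    return start, start + step      # ("Start", "End") of the range

print(bucket(20231231, 10000))  # -> (20230000, 20240000)
```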
4.1.2 Range on entire column
The entire column value can be used as a partition.
This is the case when you want to use the YEAR from a string that equals the YEAR.
Query:
select "Start", "Rows", to_decimal("Est_Size_Gb"*"Rows"/1024/1024/1024,2,2) "Est_Size_GB" from (select COLUMN "Start", count(*) "Rows", (select table_size/record_count from m_tables where table_name = 'TABLE') "Est_Size_Gb" from TABLE group by COLUMN order by COLUMN);
4.2 Determine number of partitions
In case of HASH partitioning, the number of partitions has to be determined.
A good sizing method is:
Keep the number of partitions per table low (maximum 8 partitions)
Maximum 100 – 200 million rows per partition (recommended).
The number of partitions can be determined with the following query, based on the number of records in the table divided by 100 million (when this yields too many partitions (> 8), divide by a higher value), or on the size of the table in GB divided by 50:
select to_decimal(round(RECORD_COUNT/100000000,0,round_up),1,0) PART_ON_ROWS, to_decimal(round(table_size/1024/1024/1024/50,0,round_up),1,0) PART_ON_SIZE from m_tables where TABLE_NAME = 'TABLE';
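The same sizing arithmetic can be sketched outside the database (an illustrative Python sketch assuming the 100-million-rows and 50-GB rules from above; the example figures are hypothetical):

```python
import math

# Partitions based on row count: records / 100 million, rounded up
def partitions_by_rows(record_count, rows_per_partition=100_000_000):
    return math.ceil(record_count / rows_per_partition)

# Partitions based on size: table size in GB / 50, rounded up
def partitions_by_size(table_size_gb, gb_per_partition=50):
    return math.ceil(table_size_gb / gb_per_partition)

# Hypothetical example: 650 million rows, 180 GB in memory
print(partitions_by_rows(650_000_000))  # -> 7
print(partitions_by_size(180))          # -> 4
```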
5 Implementation
5.1 SQL Commands
The following commands can be used to perform the actual partitioning of tables:
Action
SQL Command
HASH Partitioning
ALTER TABLE TABLE PARTITION BY HASH (COLUMN) PARTITIONS X;
ROUND-ROBIN Partitioning
ALTER TABLE TABLE PARTITION BY ROUNDROBIN PARTITIONS X;
RANGE Partitioning On part of column
ALTER TABLE TABLE PARTITION BY RANGE (COLUMN) (PARTITION 0 <= VALUES < 1000, PARTITION XXX <= VALUES < YYY, PARTITION OTHERS);
RANGE Partitioning On full column
ALTER TABLE TABLE PARTITION BY RANGE (COLUMN)(PARTITION VALUE = 1000, PARTITION VALUE = 2000, …, PARTITION OTHERS);
Multi-level (HASH/RANGE) Partitioning
ALTER TABLE TABLE PARTITION BY HASH (COLUMN1) PARTITIONS X, RANGE (COLUMN2) (PARTITION 0 <= VALUES < 1000, PARTITION XXX <= VALUES < YYY, PARTITION OTHERS);
Move partitions to other servers
ALTER TABLE TABLE MOVE PARTITION X TO 'server_name:3nn03' PHYSICAL;
Add new RANGE to existing partitioned table On part of column
ALTER TABLE TABLE ADD PARTITION XXX <= VALUES < YYY;
Add new RANGE to existing partitioned table On full column
ALTER TABLE TABLE ADD PARTITION VALUE = YYY;
Drop existing RANGE
ALTER TABLE TABLE DROP PARTITION XXX <= VALUES < YYY;
Adjust partitioning type
ALTER TABLE TABLE PARTITION …
Delete partitioning
ALTER TABLE TABLE MERGE PARTITIONS;
5.2 Automatic new partitions
Besides dynamic range partitioning by record threshold, there is an automatic way to add new partitions: starting with SPS06 there is a new interval option for range partitions.
When a new dynamic partition is required, SAP HANA renames the existing OTHERS partition appropriately and creates a new empty partition.
Thus, no data needs to be moved and the process of dynamically adding a partition is very quick.
With this feature you can use the following parameters to automate the split of the dynamic partition based on the number of records.
Inifile | Section | Parameter | Default | Unit | Remark
indexserver.ini | partitioning | dynamic_range_default_threshold | 10000000 | rows | automatic split once the row threshold is reached
indexserver.ini | partitioning | dynamic_range_check_time_interval_sec | 900 | sec | how often the threshold check is performed
Note: These thresholds can be changed to meet the requirements. The threshold can also be set per table, with: ALTER TABLE T PARTITION OTHERS DYNAMIC THRESHOLD 500000;
The partitioning columns need to be dates or numbers. Dynamic interval is only supported when the partition column type is TINYINT, SMALLINT, INT, BIGINT, DATE, SECONDDATE or LONGDATE. If no <interval_type> is specified, INT is used implicitly.
To check the DATA_TYPE of the selected column, execute the following query:
select COLUMN_NAME, DATA_TYPE_NAME, LENGTH from TABLE_COLUMNS where TABLE_NAME = 'TABLE' and COLUMN_NAME = 'COLUMN';
Examples:
Action
SQL Command
Quarterly new partition
ALTER TABLE TABLE PARTITION OTHERS DYNAMIC INTERVAL 3 MONTH;
Half yearly new partition
ALTER TABLE TABLE PARTITION OTHERS DYNAMIC INTERVAL 6 MONTH;
After 2 years new partition
ALTER TABLE TABLE PARTITION OTHERS DYNAMIC INTERVAL 2 YEAR;
5.3 Check Progress
To check the overall progress of a running partitioning execution, run the following query:
select 'Overall Progress: ' || to_decimal(sum(CURRENT_PROGRESS)/sum(MAX_PROGRESS)*100,2,2) || '%' from M_JOB_PROGRESS where JOB_NAME = 'Re-partitioning' and OBJECT_NAME like 'TABLE%';
To check the progress in more detail, run:
select TO_NVARCHAR(START_TIME,'YYYY-MM-DD HH24:MI:SS') START_TIME, to_decimal(CURRENT_PROGRESS/MAX_PROGRESS*100,3,2) || '%' "PROGRESS%", OBJECT_NAME, PROGRESS_DETAIL from M_JOB_PROGRESS where JOB_NAME = 'Re-partitioning' and OBJECT_NAME like 'TABLE%';
6 Aftercare
6.1 HASH Partitioning
For HASH Partitioning, regularly check the number of records per partition and consider repartitioning.
When repartitioning, choose the new number of partitions as a multiple or divisor of the current number of partitions.
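The rule above can be sketched as a small helper (an illustrative, hypothetical Python sketch, not SAP tooling; the 200-million-row target follows the sizing guideline from chapter 4.2):

```python
# Pick the smallest multiple of the current partition count that keeps
# each partition under a target row count.

def next_partition_count(current_parts, total_rows,
                         max_rows_per_part=200_000_000):
    candidate = current_parts
    while total_rows / candidate > max_rows_per_part:
        candidate += current_parts  # stay a multiple of the current count
    return candidate

# Hypothetical example: 4 partitions holding 1.1 billion rows in total
print(next_partition_count(4, 1_100_000_000))  # -> 8
```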
6.2 RANGE Partitioning
For tables with RANGE partitioning, new partitions should be created regularly, whenever a new range is reached.
Old partitions that are no longer required can be dropped.
In addition, check regularly that not too many rows reside in the OTHERS partition.
To review if new partitions should be added to existing partitioned tables and if records are present in the “OTHERS” partition, execute the following query:
select a.table_name, replace(b.LEVEL_1_EXPRESSION,'"','') "Column", b.LEVEL_1_COUNT "Partitions", max(c.LEVEL_1_RANGE_MIN_VALUE) "Last_Range_From", CASE max(c.LEVEL_1_RANGE_MAX_VALUE) WHEN max(c.LEVEL_1_RANGE_MIN_VALUE) THEN 'N/A' ELSE max(c.LEVEL_1_RANGE_MAX_VALUE) END "Last_Range_To", (select record_count from m_cs_tables where part_id = b.LEVEL_1_COUNT and table_name = a.table_name) "Rows in OTHERS" from m_tables a, partitioned_tables b, table_partitions c where a.IS_COLUMN_TABLE = 'TRUE' and a.is_partitioned = 'TRUE' and b.level_1_type = 'RANGE' and a.table_name = b.table_name and b.table_name = c.table_name and b.LEVEL_1_COUNT > 1 group by a.table_name, b.LEVEL_1_EXPRESSION, b.LEVEL_1_COUNT order by a.table_name;
When the “Last_Range_To” value is date-like and that range is already (or almost) in the past, add a new partition.
For the tables that have non-zero values in column “Rows in OTHERS”, run the following check to determine why those rows are in OTHERS and whether extra partitions should be added.
Note: Edit the query with the correct TABLE and COLUMN, and if required change the divider number. For example, if you want to remove 4 characters from the column, divide by 10000 (1 with 4 zeroes).
Example:
From 8 to 4 characters: divide by 10000 (4 zeroes)
From 8 to 6 characters: divide by 100 (2 zeroes)
select a."Start", a."End", case when b.part_id is not null then to_char(b.part_id) else 'OTHERS' end "Partition" from (select to_decimal(COLUMN/RANGE,1,0)*RANGE "Start", to_decimal(COLUMN/RANGE,1,0)*RANGE+RANGE "End" from TABLE group by to_decimal(COLUMN/RANGE,1,0)*RANGE order by to_decimal(COLUMN/RANGE,1,0)*RANGE) a left outer join (select part_id, case when level_1_range_min_value <> '' then to_decimal(level_1_range_min_value,1,0) else '0' end "Start", case when level_1_range_max_value <> '' then to_decimal(level_1_range_max_value,1,0) else '0' end "End" from table_partitions where table_name = 'TABLE') b on a."Start" >= b."Start" and a."Start" < b."End";
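The left outer join in the query above effectively maps each value bucket to its defined range partition and falls back to OTHERS when no range covers it. An illustrative Python equivalent (the ranges shown are hypothetical):

```python
# Hypothetical partition ranges as (part_id, start, end) tuples
ranges = [(1, 0, 2000), (2, 2000, 2010), (3, 2010, 2020)]

def partition_for(start_value):
    # A value belongs to the first range with start <= value < end
    for part_id, lo, hi in ranges:
        if lo <= start_value < hi:
            return str(part_id)
    return "OTHERS"  # no defined range covers the value

print(partition_for(2015))  # -> 3
print(partition_for(2025))  # -> OTHERS
```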
All these actions can be done with the commands as specified in chapter “5.1 SQL Commands”.
6.3 Partitioning Consistency Check and Repair
Once partitioning has been implemented, some consistency checks can and should be performed regularly.
To ensure consistency for partitioned tables, execute checks and repair statements, if required.
You can call general and data consistency checks for partitioned tables to check, for example, that the partition specification, metadata, and topology are correct.
If any of the tests encounter an issue with a table, the statement returns a row with details on the error. If the result set is empty (no rows returned), no issues were detected.
6.3.2 General check
Checks the consistency among partition specification, metadata and topology:
The number of first-level partitions depends on the number of records in the largest table of the table group, or on the TABLE_PLACEMENT configuration, respectively.
In all SAP BW on SAP HANA systems (single node and scale-out), maintain the table placement rules and the required SAP HANA parameters as explained below.
Scale-out with 1 Index server coordinator + 1 active index server worker node
Scale-out with 1 Index server coordinator + 2 active index server worker nodes
Scale-out with 1 Index server coordinator + 3 or more active index server worker nodes
For systems up to and including 2 TB per node
010 = Single node (up to and including 2 TB)
020 = Scale-out (up to and including 2 TB per node) with 1 coordinator and 1 worker node
030 = Scale-out (up to and including 2 TB per node) with 1 coordinator and 2 worker nodes
040 = Scale-out (up to and including 2 TB per node) with 1 coordinator and 3 or more worker nodes
For systems with more than 2 TB per node
050 = Single node (more than 2 TB)
060 = Scale-out (more than 2 TB per node) with 1 coordinator and 1 worker node
070 = Scale-out (more than 2 TB per node) with 1 coordinator and 2 worker nodes
080 = Scale-out (more than 2 TB per node) with 1 coordinator and 3 or more worker nodes
Notes:
Scale-out configurations with fewer than two active index server worker nodes are not recommended if the scale-out system uses <= 2 TB per node. See SAP Note 1702409 for details.
InfoCubes and classic/advanced DataStore Objects in scale-out systems are distributed over all nodes – including the coordinator – if the nodes are provided with more than 2 TB main memory. This optimizes memory usage of all nodes – including the coordinator. If this frequently leads to situations with excessively high CPU load on the coordinator, certain BW objects must be distributed to other nodes to reduce the CPU load.
With SAP HANA 1.0 SPS 12 and SAP HANA 2.0, a table for a BW object can have more partitions at the first level than there are valid locations (hosts) for this table, but only if the main memory of the nodes exceeds 2 TB. This is achieved by setting the parameter ‘max_partitions_limited_by_locations’ to ‘false’. The maximum number of partitions at the first level is limited by the parameter ‘max_partitions’. Its value is set depending on the number of hosts. Some operations in HANA use parallel processing on partition level. If there are more partitions, it is ensured that the CPU resources on larger HANA servers are used more efficiently. If this frequently leads to situations with excessively high CPU load on some or all HANA nodes, it may be necessary to manually adjust the partitioning rules (for some BW objects) to reduce the CPU load. In this context ‘manually’ means that a customer defined range partitioning at the second level must be adjusted, or additional, BW object-specific table placement rules must be created. Please note that manual changes to the partitioning specification of BW-managed tables at database level (via SQL statements) are not supported.
In scale-out systems with 1.5 or 2 TB per node and efficient BW housekeeping, the memory usage on the coordinator may be low because InfoCubes and DataStore Objects are not distributed across all nodes (as is the case for systems with more than 2 TB per node). In this scenario, it is not supported to use the rules for table placement for systems with more than 2 TB per node. Instead, you can check the option of identifying DataStore Objects (advanced) that are used as corporate memory, on which therefore little or no reporting takes place, and placing these objects on the coordinator node. This may be a workaround to make better use of the main memory on the coordinator node without causing a significant increase in the CPU load on the coordinator. DataStore Objects (advanced) can be placed on the coordinator node with the aid of object-specific rules for table placement. These customer-specific rules for table placement must be reversed if they cause an overly high memory or CPU load on the coordinator node.
Prerequisites:
only for scale-out systems with 1.5 or 2 TB per node
only for DataStore Objects (advanced), not for classic DataStore Objects (due to the different partitioning specifications)
only for DataStore Objects (advanced) that are used as corporate memory, i.e., DataStore Objects (advanced) without activation
sizing rules as documented in the attachment ‘advanced DataStore objects type corporate memory on coordinator node.sql’ must be respected
7.1.2 Maintain scale-out parameters
In SAP BW on SAP HANA scale-out systems only, maintain the parameters recommended in SAP Note 1958216 for the SAP HANA SPS you use.
In an SAP HANA scale-out system, tables and table partitions are assigned to an indexserver on a particular host when they are created. As the system evolves over time you may need to optimize the location of tables and partitions by running automatic table redistribution.
Different applications require different configuration settings for table redistribution.
For systems running SAP BW on SAP HANA or SAP BW/4HANA – SAP HANA 2.0 SPS 04 and higher:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('table_redist','balance_by_execution_count') = 'false' WITH RECONFIGURE COMMENT 'See SAP Note 1958216';
7.1.3 Check Table Classification
Make sure that all BW tables have the correct table classification.
For an existing SAP BW on SAP HANA system, you can check the table classification using the report RSDU_TABLE_CONSISTENCY and correct it if necessary. For information about executing the report, see SAP Note 1937062.
This note provides a file “rsdu_table_consistency_<version>.pdf”.
The report is not available on SAP BW/4HANA systems. See SAP Note 2668225 – Report RSDU_TABLE_CONSISTENCY is deprecated with SAP BW/4HANA.
Follow the instructions from the latest pdf file.
During the migration of an SAP BW system via SWPM, the report SMIGR_CREATE_DDL ensures that the correct table classification is set. Before using one of the two reports, implement the current version of the SAP Notes listed in the file REQUIRED_CORRECTION_NOTES.XLSX attached to SAP Note 1908075. Filter the list in accordance with your release and Support Package, and implement the notes in the specified sequence using transaction SNOTE.
In a heterogeneous system migration using the Software Provisioning Manager (SWPM), you also have to implement these SAP Notes in the source system before you run the report SMIGR_CREATE_DDL. If you perform the migration using the Database Migration Option (DMO), you do not have to implement the SAP Notes in the source system. Instead, you should implement the SAP Notes in the target system or include a transport with those SAP Notes when the DMO prompts you to do so.
The operation of the report is strictly divided into two parts, Check and Repair, which can't be combined in a single run!
7.1.3.1 Check tables for inconsistencies
The check for inconsistencies is pure read-only for both HANA and BW.
In Tx SE38, run report RSDU_TABLE_CONSISTENCY.
Select “Store issues” and select all checkboxes:
Run the report.
7.1.3.2 Display table consistencies
When you run the report with “Show issues in GUI”, the issues will be displayed.
When you run the report with “Store issues”, the issues will be displayed when executing in foreground.
An example is shown below:
When the report has been executed in the background with option “Store issues”, you can rerun the report and choose “Show”.
Only the issues will be displayed:
When you double click on an issue, the details will be displayed:
If you want to repair the issue, select the line and click Save and go back (number of selected items will be displayed):
Please keep in mind that the displayed columns in the table differ between check scenarios. Usually the following information is provided in all scenarios:
Exception: provides information on the severity of the issue. Red: inconsistency found or error occurred; a repair is needed, or the failure must be eliminated with further tools. Yellow: warning of unexpected results, but no immediate action is needed. Green: additional info; no action required.
Status: indicates the current state of the issue regarding a possible repair action:
OK: no error or inconsistency – just info
REPAIRABLE: this inconsistency should be repairable within RSDU_TABLE_CONSISTENCY
IRREPARABLE: an inconsistency or an error occurred during the check which can’t be solved within the report. Additional actions or analysis needed to solve this issue
FAILED: an inconsistency for which a repair attempt failed. Refer to the entry in column ‘Reason’.
REPAIRED: indicates that the issue was successfully repaired.
Type: shows the type of table like Fact tables, PSA etc.
Reason: describes why a table is classified as inconsistent. For errors that have occurred during the check, the error text is shown. Some frequently occurring errors are described in the section “Frequently obtained error messages and warnings” of the PDF (page 14).
Table: shows the table name.
BW Object: shows the BW Object (InfoCube name, DSO name etc.) the table is linked with.
7.1.3.3 Repair table inconsistencies
A user must first select the issues to be repaired before starting the repair sequence. Repairing an inconsistency always performs a write action on HANA table properties – the repair will never change any BW metadata!
Run report RSDU_TABLE_CONSISTENCY again and select repair (with the number of items selected).
Execute the program in background.
Check in SM37 the job log and spool output:
Rerun the check report from chapter “7.1.3.1 Check tables for inconsistencies”. All should be green now!
These actions have to be executed as SYSTEM user in the HANA Tenant DB, preferably from the HANA Studio.
In HANA Studio, go to the Administration Console and select tabs “Landscape” and then “Redistribution”.
Note: These steps can take a long time and should be executed during quiet maintenance windows.
7.1.5.1 Save current Table Distribution
Save the current table distribution.
7.1.5.2 Optimize Table Distribution
In the Administration Console, tabs “Landscape” and then “Redistribution”, select “Optimize Table Distribution” and click Execute. Keep the default settings and click Next. A list of the new table distribution to be implemented is displayed. Click Execute to start the actual table redistribution. The progress can again be followed.
7.1.5.3 Optimize Table Partitioning
In the Administration Console, tabs “Landscape” and then “Redistribution”, select “Optimize Table Partitioning” and click Execute. Keep the default settings and click Next. A list of the new table partitioning to be implemented is displayed. Click Execute to start the actual table repartitioning. The progress can again be followed.
The SAP audit log can have high volumes. For that reason, most companies write the audit log to the file system.
If you have a SIEM (Security Information and Event Management) solution (like SAP Enterprise Threat Detection, Onapsis, SecurityBridge and many more) that needs to scan your audit log, the best way is to store the audit log data in the database. This ensures that the audit log can be analyzed at high performance.
The downside of storing it in the database is that the audit log table can grow quite fast, which is expensive, especially if you run on HANA.
To counter the high costs, you can unload the data from the database into archive files for the SAP audit log. This means that your most recent data stays in the database for fast analysis, while your history is kept on cheaper disk storage.
This blog will explain how to archive audit log data via archiving object BC_SAL. The generic technical setup must have been executed already, and is explained in this blog.