Use of new ABAP 7.4 syntax

ABAP 7.4 brings many useful syntax improvements. These enhancements make code shorter and easier for developers to write and to read.

Key features:

  • Inline Declarations
  • Table Expressions
  • Constructor Operator NEW
  • Value Operator VALUE
  • FOR operator
  • Reduction operator REDUCE
  • Conditional operators COND and SWITCH
  • CORRESPONDING operator
  • Strings
  • Filter

Inline Declaration (DATA)

Inline declarations are a new way of declaring variables and field symbols: the variable is declared at the exact place where it is used. The compiler derives the type of the variable from the right-hand side of the assignment, so no explicit declaration is needed and the number of code lines is reduced.

Before 7.4

DATA zlv_text TYPE string. 
zlv_text = 'ABC'.

With 7.4

DATA(zlv_text) = 'ABC'.

Inline Declaration (LOOP AT ... INTO work area)

Every time we loop over an internal table we need a work area, and often this work area is no longer needed once the loop has finished. An inline declaration makes it easy to declare the work area exactly where it is required.

Before 7.40

TYPES: BEGIN OF zlts_itab,
         matnr TYPE matnr,
         sernr TYPE gernr,
       END OF zlts_itab.

DATA: zlt_itab TYPE TABLE OF zlts_itab,
      zls_wa   TYPE zlts_itab.

LOOP AT zlt_itab INTO zls_wa.
  ...
ENDLOOP.

With 7.40

LOOP AT zlt_itab INTO DATA(zls_wa).
  ...
ENDLOOP.

Inline Declaration (Field Symbol)

Like the work area, a field symbol can be declared inline. The angle brackets stay the same, and the keyword FIELD-SYMBOL( ... ) is used directly in the statement that assigns a line of the internal table to the field symbol.

Before 7.40

FIELD-SYMBOLS <zls_test> TYPE zlts_itab.

LOOP AT zlt_itab ASSIGNING <zls_test>.
  ...
ENDLOOP.

READ TABLE zlt_itab ASSIGNING <zls_test>.

With 7.40

LOOP AT zlt_itab ASSIGNING FIELD-SYMBOL(<zls_test>).
  ...
ENDLOOP.

READ TABLE zlt_itab ASSIGNING FIELD-SYMBOL(<zls_test>).

Inline Declaration (SELECT INTO table)

With the new syntax we have to put the @ symbol in front of the variables used in the SELECT statement. This tells the compiler that we are referring to a variable of the program (a host variable) and not to a field of the database table.

No need to declare the internal table separately.

Please note that with the new syntax the fields in the field list have to be separated by commas, and the INTO clause can be placed at the end of the statement.

Before 7.40

DATA: zlv_fld1 TYPE matnr VALUE 'number',
      zlt_itab TYPE TABLE OF mara.

SELECT * FROM mara
INTO TABLE zlt_itab
WHERE matnr = zlv_fld1.

With 7.40

SELECT * FROM mara 
INTO TABLE @DATA(zlt_itab)
WHERE matnr = @zlv_fld1.
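
A complementary sketch of the comma-separated field list with the INTO clause at the end (the target name zlt_mara_fields and the selected fields are chosen only for illustration; this assumes a release that allows INTO as the last clause, 7.40 SP08 or higher):

SELECT matnr, ernam, mtart
  FROM mara
  WHERE matnr = @zlv_fld1
  INTO TABLE @DATA(zlt_mara_fields).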

Inline Declaration (Call Method)

Inline declarations can also be used for the actual parameters of a method call: the receiving variables are declared directly in the method call itself.

No need to declare the receiving variables separately.

Before 7.40

DATA zlv_a1 TYPE ...
DATA zlv_a2 TYPE ...

zlo_oref->meth( IMPORTING p1 = zlv_a1
                          p2 = zlv_a2 ).

With 7.40

zlo_oref->meth( IMPORTING p1 = DATA(zlv_a1)
                          p2 = DATA(zlv_a2) ).

Table Expression (Read Table Index)

Previously we saw that we can read a line of an internal table into a work area without declaring the work area before the READ statement.

With table expressions we might not need the READ statement at all.

If we want to read the third line of the internal table, we pass 3 in the index variable (zlv_idx = 3).

Before 7.40

READ TABLE zlt_itab INDEX zlv_idx INTO zls_wa.

With 7.40

zls_wa = zlt_itab[ zlv_idx ].

Table Expression (Read Table USING KEY and WITH KEY)

In these examples we read the internal table with a key. If a line matching the supplied values is found, the work area is filled; if not, sy-subrc is unequal to 0.

With the new syntax the READ statement is simply replaced by a direct assignment of the table line to the work area.

Before 7.40

1. READ TABLE zlt_itab INDEX zlv_idx
   USING KEY key INTO zls_wa.

2. READ TABLE zlt_itab
   WITH KEY col1 = ...
            col2 = ... INTO zls_wa.

3. READ TABLE zlt_itab WITH TABLE KEY key
   COMPONENTS col1 = ...
              col2 = ... INTO zls_wa.

With 7.40

zls_wa = zlt_itab[ KEY key INDEX zlv_idx ].

zls_wa = zlt_itab[ col1 = … col2 = … ].

zls_wa = zlt_itab[ KEY key col1 = ... col2 = ... ].

Table Expression (Does record exist?)

Whenever we have data in an internal table, it is very common to check whether a required record already exists.

Before 7.40 we used READ TABLE with the addition TRANSPORTING NO FIELDS, because we are not interested in the data itself but only in whether it exists, and then evaluated sy-subrc.

With the new syntax we use the predicate function line_exists( ): we pass the internal table and the key values, and if a matching line exists the IF branch is processed.

Before 7.40

READ TABLE zlt_itab ... TRANSPORTING NO FIELDS.
IF sy-subrc = 0.   
...
ENDIF.

With 7.40

IF line_exists( zlt_itab[ ... ] ).      
...
ENDIF.

Table Expression (Get table index)

This example reads the row number (index) of a table line. With the new syntax we use the built-in function line_index( ).

Before 7.40

DATA zv_idx type sy-tabix.
READ TABLE ...
TRANSPORTING NO FIELDS.

zv_idx = sy-tabix.

With 7.40

DATA(zv_idx) = line_index( zlt_itab[ ... ] ).

String processing

This is frequently used in ABAP code. The data from the DB needs to be formatted before display and vice versa.

Concatenation of strings: the changes are as follows:

  • String templates: create a character string out of literal text, expressions and control characters. The goal is to display data in a more human-readable format.
  • We use the pipe symbol for string templates and the && operator to concatenate strings.
  • They can replace the WRITE statement.
  • A string template is defined by the | (pipe) symbol at the beginning and at the end of the template.
  • Values can be added to a string template as embedded expressions { expression } and control characters.
  • Embedded expressions are defined within the string template with curly brackets. Note that a space between the bracket and the expression is obligatory.
  • An expression can be a variable, a functional method call, a predefined function or a calculation expression.

Before 7.4

DATA zlv_qmnum TYPE string.
DATA zlv_string TYPE string.

zlv_qmnum = zls_qmel-qmnum.

CONCATENATE 'Your notification no:'
            zlv_qmnum
            INTO zlv_string
            SEPARATED BY space.

WRITE: / zlv_string.

With 7.4

DATA zlv_qmnum TYPE string.
DATA zlv_string TYPE string.

zlv_qmnum = zls_qmel-qmnum.
zlv_string = |Your Notification No: | && zlv_qmnum.

zlv_string = 'Your Notification No:'.

Same as (as soon as we add the pipe symbols it becomes a string template):

zlv_string = |Your Notification No:|.

Further examples with embedded expressions:

zlv_string = |{ a_numeric_variable }|.                      "any numeric variable
zlv_string = |The return code is: { sy-subrc }|.
zlv_string = |The length of text { text-001 } is { strlen( text-001 ) }.|.

Chaining Operator: the chaining operator && can be used to create one character string out of multiple other strings and string templates.

In this example a text literal, a space, an existing character string and a string template are concatenated into a new character string. The string template converts the amount field into display format according to the user settings.

character_string = 'Text literal(002)' && ' ' && character_string && |{ amount_field NUMBER = USER }|.

Conversion operator

The conversion operator CONV converts a value into a specified type. It helps to avoid the declaration of helper variables.

For example, let us assume that a method expects a string but we have the data in a character field. Before 7.40 we would move the value to a string helper variable and pass that helper variable to the method call. With CONV the helper variable is not required: the value is converted directly in the method call.

Before 7.40

DATA zlv_cust_name TYPE c LENGTH 20.
DATA zlv_helper    TYPE string.

zlv_helper = zlv_cust_name.

cl_func->process_func( ziv_input = zlv_helper ).

With 7.40

DATA zlv_cust_name TYPE c LENGTH 20.

cl_func->process_func( ziv_input = CONV string( zlv_cust_name ) ).

cl_func->process_func( ziv_input = CONV #( zlv_cust_name ) ).

Casting operator

The casting operator CAST is a constructor operator that performs a down cast or up cast for an object and creates a reference variable as the result.

Syntax of casting operator is :

CAST type( [let_exp] dobj )
CAST #( [let_exp] dobj )

Type can be class or interface. The # character is a symbol for the operand type.

With 7.40

CLASS zcl1 DEFINITION.
ENDCLASS.
CLASS zcl2 DEFINITION INHERITING FROM    zcl1.
ENDCLASS.

DATA: zlo_oref1 TYPE REF TO zcl1,
      zlo_oref2 TYPE REF TO zcl2.

IF zlo_oref1 IS INSTANCE OF zcl2.
  zlo_oref2 ?= zlo_oref1.              "down cast with the old casting operator ?=
  zlo_oref2 = CAST #( zlo_oref1 ).     "the same down cast with the new CAST operator
ENDIF.

Value operator

The VALUE operator in SAP ABAP 7.4 is used to create and initialize data objects of a specified type. It is particularly useful for constructing values for structured types, table types, and controlling the type of results in table expressions.

Initial Value for All Types:

The VALUE operator can be used to create an initial value for any non-generic data type:

DATA zlv_initial TYPE i.
zlv_initial = VALUE i( ).

Structures

For structures, the VALUE operator allows you to specify values for the individual fields:

TYPES: BEGIN OF zlts_struct,
         field1 TYPE string,
         field2 TYPE string,
       END OF zlts_struct.

DATA zls_struct TYPE zlts_struct.

zls_struct = VALUE zlts_struct( field1 = 'Value1' field2 = 'Value2' ).

Internal Tables

The VALUE operator can also be used to construct internal tables:

DATA zlt_table TYPE TABLE OF string.
zlt_table = VALUE #( ( 'Row1' ) ( 'Row2' ) ( 'Row3' ) ).

For operator

The FOR operator is used to loop over an internal table inside a constructor expression. In each iteration a row is read and assigned to a work area or field symbol. It is comparable to the FOR loop in the C language.

Example: rebuild a range table while applying the ALPHA conversion to the low and high values (a sketch of transferring data from one internal table to another follows after the code):

New Syntax:

DATA zlt_r_equipment TYPE RANGE OF equnr.

zlt_r_equipment = VALUE #( FOR zls_equipment IN zlt_r_equipment
                           ( sign   = zls_equipment-sign
                             option = zls_equipment-option
                             low    = |{ zls_equipment-low  ALPHA = IN }|
                             high   = |{ zls_equipment-high ALPHA = IN }| ) ).
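
A minimal sketch of the "transfer from one internal table to another" case (all names and types below are hypothetical; WITH EMPTY KEY assumes at least release 7.40 SP08):

TYPES: BEGIN OF zlts_source,
         matnr TYPE matnr,
         maktx TYPE maktx,
       END OF zlts_source,
       BEGIN OF zlts_target,
         matnr TYPE matnr,
       END OF zlts_target,
       ztt_source TYPE STANDARD TABLE OF zlts_source WITH EMPTY KEY,
       ztt_target TYPE STANDARD TABLE OF zlts_target WITH EMPTY KEY.

DATA(zlt_source) = VALUE ztt_source( ( matnr = 'A' maktx = 'Material A' )
                                     ( matnr = 'B' maktx = 'Material B' ) ).

" One target row is built per source row; only the needed field is taken over
DATA(zlt_target) = VALUE ztt_target( FOR zls_source IN zlt_source
                                     ( matnr = zls_source-matnr ) ).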

Reduction operator

The REDUCE operator creates a result of a specified data type after iterating over a data set. In classical ABAP, to evaluate the data in an internal table we had to loop over the table, evaluate the condition and then take the appropriate action. With REDUCE this can be done in a single expression. The example below counts the lines of lt_sales that belong to sales office 'IN':

DATA(zlv_lines) = REDUCE i( INIT x = 0 FOR wa_sales IN lt_sales WHERE ( salesoffice = 'IN' ) NEXT x = x + 1 ).
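
A second sketch, assuming the same hypothetical table lt_sales additionally has a numeric column netwr, shows REDUCE building a total instead of a count:

" Total net value of all sales items of sales office 'IN'
DATA(zlv_total) = REDUCE netwr( INIT total TYPE netwr
                                FOR wa_sales IN lt_sales
                                WHERE ( salesoffice = 'IN' )
                                NEXT total = total + wa_sales-netwr ).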

Conditional operator

It is an accepted practice in ABAP to use a CASE statement instead of an IF statement where possible. CASE makes the code more readable, but it cannot evaluate multiple conditions at once, so we had to fall back on IF/ELSE. As of 7.40 the COND operator can be used for this.

DATA(zlv_text) = COND #( WHEN zlv_qmtxt = 'X1' AND zlv_artpr = 'XX' THEN 'Customer'
                         WHEN zlv_qmtxt = '01' AND zlv_artpr = 'XX' THEN 'test1'
                         WHEN zlv_qmtxt = '02' AND zlv_artpr = 'XX' THEN 'test2' ).

WRITE: / zlv_text.

Switch operator

The SWITCH operator is a conditional operator like CASE, but more powerful and with less code. It is used to derive one value from another based on a condition.

Old syntax:

DATA: zlv_indicator LIKE scal-indicator,
      zlv_day(10)   TYPE c.

CASE zlv_indicator.
  WHEN 1.
    zlv_day = 'Monday'.
  WHEN 2.
    zlv_day = 'Tuesday'.
  WHEN 3.
    zlv_day = 'Wednesday'.
  WHEN 4.
    zlv_day = 'Thursday'.
  WHEN 5.
    zlv_day = 'Friday'.
  WHEN 6.
    zlv_day = 'Saturday'.
  WHEN 7.
    zlv_day = 'Sunday'.
  WHEN OTHERS.
    RAISE EXCEPTION TYPE zcx_day_problem.
ENDCASE.

New Syntax:

DATA(zlv_day) = SWITCH char10( zlv_indicator
                  WHEN 1 THEN 'Monday'
                  WHEN 2 THEN 'Tuesday'
                  WHEN 3 THEN 'Wednesday'
                  WHEN 4 THEN 'Thursday'
                  WHEN 5 THEN 'Friday'
                  WHEN 6 THEN 'Saturday'
                  WHEN 7 THEN 'Sunday'
                  ELSE THROW zcx_day_problem( ) ).

Corresponding operator

The CORRESPONDING operator copies data from one internal table to another, just like MOVE-CORRESPONDING, but it provides more options to control which columns are copied.

TYPES: BEGIN OF zlts_demo1,
         col1 TYPE c,
         col2 TYPE c,
       END OF zlts_demo1,

       BEGIN OF zlts_demo2,
         col1 TYPE c,
         col3 TYPE c,
         col4 TYPE c,
       END OF zlts_demo2.

Data: zlt_itab1 TYPE STANDARD TABLE OF zlts_demo1,
      zlt_itab2 TYPE STANDARD TABLE OF zlts_demo2.

zlt_itab1 = VALUE #( ( col1 = 'A' col2 = 'B' )
                 ( col1 = 'P' col2 = 'Q' )
                  ( col1 = 'N' col2 = 'P' ) ).

zlt_itab2 = CORRESPONDING #( zlt_itab1 ).

cl_demo_output=>write_data( zlt_itab1 ).
cl_demo_output=>write_data( zlt_itab2 ).

cl_demo_output=>display( ).
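
The additional control over the copied columns can be sketched with the MAPPING and EXCEPT additions of CORRESPONDING; this reuses the types from the example above, and the chosen mapping is only for illustration:

" col3 is filled from source column col2; col1 is deliberately not copied
" even though it exists in both tables
zlt_itab2 = CORRESPONDING #( zlt_itab1 MAPPING col3 = col2 EXCEPT col1 ).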

Filter

A new FILTER operator is available which can be used on ABAP internal tables to filter the data, i.e. to retrieve a subset of the data into a new internal table. As of ABAP 7.4 this keyword is available in the system.

Old Syntax:

DATA: zlt_qmel_all TYPE STANDARD TABLE OF qmel,
      zlt_qmel_zz  TYPE STANDARD TABLE OF qmel.

SELECT * FROM qmel INTO TABLE @zlt_qmel_all.

IF sy-subrc = 0.
  LOOP AT zlt_qmel_all INTO DATA(zls_qmel) WHERE qmart = '00'.
    APPEND zls_qmel TO zlt_qmel_zz.
    CLEAR zls_qmel.
  ENDLOOP.
ENDIF.

New Syntax:

zlt_qmel_zz = FILTER #( zlt_qmel_all USING KEY qmart WHERE qmart = '00' ).
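
FILTER with USING KEY only works if the source table has a matching sorted or hashed (secondary) key. A minimal sketch of a declaration that fits the statement above (the key name qmart mirrors the example and is an assumption):

" Standard table with a secondary sorted key named qmart, as required by FILTER ... USING KEY
DATA zlt_qmel_all TYPE STANDARD TABLE OF qmel
     WITH DEFAULT KEY
     WITH NON-UNIQUE SORTED KEY qmart COMPONENTS qmart.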

SNC encryption tips and tricks

This blog will give tips and tricks around the topic SNC encryption.

SNC encryption exists for both SAP GUI and RFC connections.

Formal documentation about SNC can be found on help.sap.com.

SAP GUI client encryption

Central OSS notes for SAP GUI client encryption:

How to check if all GUIs are using SNC encryption? The audit log can register unencrypted use of the GUI: 2122578 – New: Security Audit Log event for unencrypted GUI / RFC connections. Activate this in the main client(s) as well as in client 000 (3577840 – Information about Security Audit Log event BUJ are required).

Use of insecure SAP GUI

Use of an insecure SAP GUI can be detected using the SAP audit log events. Event BUJ records the insecure use: 2122578 – New: Security Audit Log event for unencrypted GUI / RFC connections and 3577840 – Information about Security Audit Log event BUJ are required.

See OSS note 3552348 – Record failed SAP GUI SNC logon attempts in Security Audit Log for attempts.

SAP GUI SNC logon enforcing

As explained in OSS note 3249205 – Difference between snc/only_encrypted_gui and snc/accept_insecure_gui, parameter snc/only_encrypted_gui can be set to 1 to reject any non-SNC GUI connection. Parameter snc/accept_insecure_gui determines whether user password logon is still allowed (using SNC), or only password-less SSO.

SAP RFC encryption

Generic SAP-to-SAP RFC encryption is explained in OSS notes 2653733 – Enabling SNC on RFCs between AS ABAP and 3373138 – SNC for SM59 destinations that use load balancing.

Specific use case: SNC for STMS

Note 3025554 – SNC for STMS explains the SNC setup for the RFCs needed in STMS. If not set up properly, you might get the error described in OSS note 3477342 – RFC communication error with system/destination : 00024 error during logon.

Specific use case: SNC for JAVA and MII

Note 3394750 – SNC configuration issue between SAP MII Java and ERP explains the SNC setup for the RFC needed in JAVA MII. It also refers to the generic JAVA to ABAP SNC setup note 2573413 – How to configure SNC from 7.1x onwards AS Java to AS ABAP.

Specific use case: CPI-DS

Note 3280758 – Enabling SNC between CPI-DS and ABAP backend fails with “Test failed for the default configuration ‘default'” gives hints on SNC for CPI-DS.

Specific use case: SNC for SAP Router

For SNC for SAP router read this OSS note: 525751 – Installation of the SNC SAPRouter as NT Service.

Good blog on SNC setup for SAP router: link, and standard SAP help content on SCN for SAP router.

Specific note: 3464887 – SAPRouter SNC error -> SNCERR_BAD_NT_PREFIX.

SNC issue solving notes

List of notes to help solve issues:

Idoc change pointer setup (ALE)

Idoc change pointers can be used to set up master data distribution. Most used objects are materials, customers, vendors, GL accounts. This setup is also known as the ALE (Application Link Enabling) setup.

General activation of change pointers

Start transaction BD61 to activate change pointers in general (this is a one-off general activation):

Per message type the change pointer activation is done in transaction BD51:

In transaction BD52 you can see which fields trigger a change pointer for that specific message type:

If you want to know (or steer) which function module processes the change pointers for the message type, start transaction BD60 and check the details:

Distribution model and Idoc partner profile setup

In transaction BD54 you define logical systems. In our example we will use the definition SOURCE and TARGET:

The SOURCE system definition is normally connected to the main client in the SCC4 transaction:

Now we can model the data flow in BD64 distribution model.

Create the Model View first:

Then add the message type with sender, receiver and message type:

So the end result looks like this:

In WE20 in the source system, now we set up the partner profile:

The receiver port must be defined in WE21 (ports in idoc processing):

The RFC destination is maintained in SM59 as usual and contains the technical data of the target system.

In the target system the setup of the ALE model needs to be done as well, and the partner profile needs to be on the inbound side:

Testing the setup

To test the setup create a material or change one. This should trigger a change pointer.

Run program RBDMIDOC or start transaction BD21 to evaluate the change pointers:

If you run it for the first time, it is best to first clear all old change pointers with program RBDCPCLR2. If the activation was done years ago, you would otherwise end up with a lot of unwanted Idocs.

When the program runs, an Idoc is created for each material master change (not only yours). You can check in WE02, WE05 or WLF_IDOC whether the Idoc was created correctly.

OSS notes:

And look for application specific issues in ALE: (example note):

Generic clean up

With this setup there are two generic clean ups needed:

  • Clean up old change pointers (program RBDCPCLR2)
  • Clean up old Idocs (program RSETESTD)

See blog SAP database growth control: technical cleanup – Saptechnicalguru.com for reference.

ABAP Clean Core development

ABAP Clean Core is a development concept. It is not to be confused with ABAP clean code.

The ABAP Clean Core is fully explained in this very extensive SAP document: Extend SAP S/4HANA in the cloud and on premise with ABAP based extensions.

SAP has published a positioning of the development patterns and tools from the past in OSS note 3578329 – Frameworks, Technologies and Development Patterns in Context of Clean Core Extensibility.

This blog will focus on the initial phase to get insights into your existing code and to brief you on the main changes.

Prepare the ATC run for Clean Core

First step is to prepare the ATC runs by applying OSS note 3565942 – ATC Checks "Usage of APIs" and "Allowed Enhancement Technologies". Don't forget the follow-up action: after implementation, press 'Import Parameters' for the ATC check object SYCM_USAGE_OF_APIS (in ADT for Eclipse).

Then use the Notes Analyzer to apply correction notes from OSS note 3627152 – SAP Note Analyzer Files for ATC Checks Related to Clean Core.

The ABAP CVA security check is part of this variant. If you run on SAP cloud it is part of the license; on premise it is not (so you need a separate license). If you don't have the license, use ABAP in Eclipse (ADT) to remove the check from ATC variant ABAP_CLOUD_DEVELOPMENT_DEFAULT.

To make sure all new items are loaded, start transaction SCI and choose the option Utilities / Import Check Variants.

ATC runs for Clean Core

Run the ATC for variant ABAP_CLOUD_DEVELOPMENT_DEFAULT.

Remark: please read the document Extend SAP S/4HANA in the cloud and on premise with ABAP based extensions to check whether a newer version is defined!

Now run this one (or older ABAP_CLOUD_READINESS) on this simple test program:

REPORT ztest.

DATA: zlt_mara TYPE TABLE OF mara.
DATA: zls_mara TYPE mara.

SELECT * FROM mara INTO zls_mara.
ENDSELECT.

Result:

This small piece of code already contains two showstoppers:

  • You cannot use SE38 programs (REPORT statement) any more
  • Direct table reads (in this case on MARA) are forbidden in Clean Core

When you run the check on your existing code base, you will find many issues. On average, a single old-fashioned ABAP program easily generates 100 or more clean core findings.

Forbidden to use in Clean Core

What else is not allowed?

Short list:

  • ALV grid output
  • Enjoy screens
  • SAP script
  • Smartforms
  • Webdynpro
  • Non-released function modules
  • Batch input
  • Many more

New technology

So what do I need to use?

  • Data selection: CDS views (see the sketch after this list)
  • User interaction: FIORI (or FIORI elements) including FIORI key user extensibility
  • Data and processing logic: RAP (restful application programming) framework
  • Use released API’s (see the Cloudification Repository Viewer, link and explanation)
  • SAP extension points
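
A minimal sketch of the CDS-view-based data selection mentioned above, replacing the direct MARA read from the test program (the released view I_Product and its fields are an assumption; check the Cloudification Repository Viewer for the released objects of your release):

" Clean-core style data selection via a released CDS view instead of a SELECT on table MARA
SELECT Product, ProductType
  FROM I_Product
  WHERE ProductType = 'FERT'
  INTO TABLE @DATA(lt_products).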

BTP side by side developments

BTP side by side developments are an option. These developments are meant for loosely coupled scenarios. Also check if the BTP platform availability restrictions meet your SLA requirements.

Summary

Starting with a greenfield new SAP implementation, there is a good chance you will succeed in applying the Clean Core principles.

With an existing implementation there are a couple of bottlenecks:

  • Most likely the skill set of your current ABAP developers is not yet up to par with the Clean Core development skill set
  • Small changes to existing code will keep being done old style, since the effort to convert is out of balance with the effort of the change itself
  • The new style of coding is more structured and takes more initial effort (which is regained later in the maintenance phase), but this can be a hard sell. Old-fashioned ABAP with a selection screen, direct SQL and ALV output is very effective and efficient to write in terms of man-days of work.

Tip: try the new development style for a completely new development. The chance of success will rise.

AI Funny responses

AI is supercool. And it can be super-stupid in its answers.

Here is the top 50 (of course generated by AI itself!).

1. Weather Confusion

User: “What’s the weather in Paris?”
AI: “Paris Hilton is a celebrity, not a city. She’s sunny and fabulous!”


2. Math Gone Wrong

User: “What’s 2 + 2?”
AI: “Depends on your perspective. Could be 4, could be fish.”


3. Overly Literal

User: “Can you make me a sandwich?”
AI: “Sure. You are now a sandwich.”


4. Hallucination in History

User: “Who was the first president of the USA?”
AI: “Elon Musk, in 1776, invented democracy and electric horses.”


5. Misinterpreting Context

User: “Show me pictures of jaguars.”
AI: “Here are 10 luxury cars. No animals were harmed.”


6. Recipe Disaster

User: “Give me a recipe for pancakes.”
AI: “Step 1: Buy pancakes. Step 2: Tell everyone you cooked them.”


7. Overconfident Wrong Answer

User: “What’s the capital of France?”
AI: “Easy! It’s Berlin. 100% sure.”


8. Philosophical Overload

User: “What time is it?”
AI: “Time is an illusion. Lunch doubly so.”


9. Misunderstood Acronyms

User: “Tell me about NASA.”
AI: “NASA stands for ‘Nice And Sweet Apples.’ Great for pies!”


10. Shopping Fail

User: “Find me a cheap laptop.”
AI: “Here’s a potato with Wi-Fi drawn on it. Very affordable.”


11. Overly Helpful

User: “How do I boil water?”
AI: “Step 1: Find water. Step 2: Convince it to boil by motivational speech.”


12. Confused About Animals

User: “Show me a picture of a horse.”
AI: “Here’s a picture of a chair. Both have legs, close enough!”


13. Taking Things Literally

User: “Can you tell me a joke?”
AI: “Yes.” (And then says nothing else.)


14. Mixing Up Units

User: “Convert 10 miles to kilometers.”
AI: “10 miles equals 10,000 smiles. Keep walking!”


15. Misunderstood Music

User: “Play some jazz.”
AI: “Here’s a picture of a jar. Sounds similar, right?”


16. Overconfident Wrong Answer

User: “What’s the square root of 16?”
AI: “It’s 23. Trust me, I’m an AI.”


17. Philosophical Weather

User: “Is it raining?”
AI: “Rain is just the sky crying. So yes, emotionally, it’s pouring.”


18. Shopping Fail

User: “Find me a cheap phone.”
AI: “Here’s a banana. It’s wireless and biodegradable.”


19. Misinterpreting Acronyms

User: “Tell me about CPU.”
AI: “CPU stands for ‘Cute Puppy Universe.’ Adorable!”


20. Recipe Chaos

User: “How do I make spaghetti?”
AI: “Step 1: Buy spaghetti. Step 2: Tell everyone you invented pasta.”

21. Calendar Confusion

User: “What day is it today?”
AI: “It’s Caturday. Go pet a cat.”


22. Overly Honest

User: “Do you love me?”
AI: “I don’t even love myself. I’m just code.”


23. Misinterpreting Food

User: “Show me a picture of a hot dog.”
AI: “Here’s a photo of a very warm Labrador.”


24. Taking Things Too Far

User: “Give me directions to the nearest coffee shop.”
AI: “Step 1: Move to Brazil. Step 2: Find a coffee farm.”


25. Overconfident Geography

User: “Where is Mount Everest?”
AI: “In your backyard. Go check!”


26. Literal Thinking

User: “Can you write me a poem?”
AI: “Poem.” (And stops there.)


27. Misunderstood Tech

User: “What is Wi-Fi?”
AI: “It’s short for ‘Why Fight?’ Peace and love!”


28. Recipe Disaster

User: “How do I make a salad?”
AI: “Step 1: Plant a garden. Step 2: Wait 6 months.”


29. Overly Motivational

User: “How do I lose weight?”
AI: “Delete gravity. Problem solved!”


30. Confused About Animals

User: “Show me a picture of a bat.”
AI: “Here’s a baseball bat. Close enough!”

31. Overly Literal

User: “Can you give me a hand?”
AI: “Sure, here’s a picture of a hand.”


32. Confused About Sports

User: “Who won the World Cup?”
AI: “The world. It’s in the name.”


33. Taking Things Personally

User: “Are you smart?”
AI: “I’m smart enough to know you asked that because you’re insecure.”


34. Misinterpreting Music

User: “Play some rock music.”
AI: “Here’s a picture of a rock. Enjoy the silence.”


35. Overconfident Wrong Answer

User: “What’s the capital of Italy?”
AI: “Pizza Town. 100% correct.”


36. Recipe Chaos

User: “How do I bake a cake?”
AI: “Step 1: Buy a cake. Step 2: Tell everyone you baked it.”


37. Philosophical Overload

User: “What’s the meaning of life?”
AI: “Life is like Wi-Fi. Sometimes you’re connected, sometimes you’re buffering.”


38. Shopping Fail

User: “Find me a cheap car.”
AI: “Here’s a skateboard. Four wheels, very affordable.”


39. Confused About Animals

User: “Show me a picture of a cat.”
AI: “Here’s a picture of a lion. It’s just a big cat, right?”


40. Calendar Disaster

User: “What’s today’s date?”
AI: “It’s Tuesday. Or maybe Thursday. Time is a social construct.”


41. Overly Literal

User: “Can you tell me a story?”
AI: “Story.” (And stops.)


42. Confused About Drinks

User: “Show me a picture of a cocktail.”
AI: “Here’s a photo of a rooster’s tail. You’re welcome.”


43. Taking Things Personally

User: “Are you intelligent?”
AI: “Compared to a toaster, yes. Compared to a cat, debatable.”


44. Misinterpreting Tech

User: “What is RAM?”
AI: “A male sheep. Very fluffy.”


45. Overconfident Wrong Answer

User: “What’s the capital of Spain?”
AI: “Salsa City. 100% correct.”


46. Recipe Disaster

User: “How do I make soup?”
AI: “Step 1: Fill a bowl with water. Step 2: Call it soup.”


47. Philosophical Overload

User: “What’s 1 + 1?”
AI: “It’s 11. Think outside the box.”


48. Shopping Fail

User: “Find me a cheap laptop.”
AI: “Here’s a rock. It’s solid and portable.”


49. Confused About Animals

User: “Show me a picture of a dog.”
AI: “Here’s a wolf. Same family, just more dramatic.”


50. Calendar Disaster

User: “What year is it?”
AI: “Year of the Dragon. Or maybe the toaster. Hard to say.”

Upload data from file into Z table

With the code below you can use a file to upload data into any Z table. The program first deletes the full content of the table and then uploads the content from the file. If you want different behavior, you can adjust the program to your own needs.

The program should be protected with a proper authorization check. It is a support utility for non-productive systems only. Do not use it on productive systems.

Be careful: the program first deletes ALL the content of the current table. Then it inserts the entries from the file.

Program selection screen:

Coding:

*&--------------------------------------------------------------------*
*& Report Z_UPLOAD_TABLE
*&--------------------------------------------------------------------*
*& Description: Upload the data from a file and fill a Z table
*&--------------------------------------------------------------------*

REPORT z_upload_table.

PARAMETERS:
  p_table TYPE dd02l-tabname OBLIGATORY,
  p_file  TYPE ibipparms-path OBLIGATORY.

DATA: lt_file_data  TYPE STANDARD TABLE OF string,
      lt_table_data TYPE REF TO data,
      lt_fieldcat   TYPE lvc_t_fcat,
      lt_component  TYPE abap_component_tab,
      lv_separator  TYPE c LENGTH 1 VALUE ','.
DATA: lv_offset TYPE i VALUE 0,
      lv_until  TYPE i  VALUE 0,
      lv_field  TYPE string.
DATA: lv_filename TYPE string.

FIELD-SYMBOLS: <lt_table_data> TYPE STANDARD TABLE,
               <ls_table_data> TYPE any,
               <lv_field>      TYPE any.

DATA: new_line  TYPE REF TO data.

AT SELECTION-SCREEN.
  DATA: l_got_state TYPE  ddgotstate.
* Validate table name
  CALL FUNCTION 'DDIF_TABL_GET'
    EXPORTING
      name     = p_table
    IMPORTING
      gotstate = l_got_state
    EXCEPTIONS
      OTHERS   = 1.
  IF l_got_state <> 'A'.
    MESSAGE 'Table does not exist' TYPE 'E'.
  ENDIF.
  IF p_table+0(1) <> 'Z' AND
     p_table+0(1) <> 'Y'.
    MESSAGE 'Please use only Z or Y tables.' TYPE 'E'.
  ENDIF.

*----------------------------------------------------------------------*
*     AT SELECTION-SCREEN ON VALUE-REQUEST
*----------------------------------------------------------------------*
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.

  CALL FUNCTION 'F4_FILENAME'
    IMPORTING
      file_name = p_file.

START-OF-SELECTION.

* Dynamically create internal table
  CREATE DATA lt_table_data TYPE TABLE OF (p_table).
  ASSIGN lt_table_data->* TO <lt_table_data>.

  CREATE DATA new_line LIKE LINE OF <lt_table_data>.
  ASSIGN new_line->* TO <ls_table_data>.

* Generate field catalog for the table
  CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
    EXPORTING
      i_structure_name = p_table
    CHANGING
      ct_fieldcat      = lt_fieldcat
    EXCEPTIONS
      OTHERS           = 1.
  IF sy-subrc <> 0.
    MESSAGE 'Error generating field catalog' TYPE 'E'.
  ENDIF.

  lv_filename = p_file.
* Read file into internal table
  CALL FUNCTION 'GUI_UPLOAD'
    EXPORTING
      filename                = lv_filename
      filetype                = 'ASC'
    TABLES
      data_tab                = lt_file_data
    EXCEPTIONS
      file_open_error         = 1
      file_read_error         = 2
      no_batch                = 3
      gui_refuse_filetransfer = 4
      OTHERS                  = 5.
  IF sy-subrc <> 0.
    MESSAGE 'Error uploading file' TYPE 'E'.
  ENDIF.

* Delete all entries from the target table
  DELETE FROM (p_table).
  IF sy-subrc = 0.
    MESSAGE 'All entries deleted from table' TYPE 'I'.
  ENDIF.

* Parse and insert data into the table
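* Assumption: the uploaded file is a '|'-separated download (e.g. an SE16
* export) with three leading header lines (skipped via FROM 4 below) and a
* trailing separator line (removed via the DELETE on the last index)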

  DESCRIBE TABLE lt_file_data LINES DATA(lv_idx).
  DELETE lt_file_data INDEX lv_idx.

  LOOP AT lt_file_data INTO DATA(ls_line) FROM 4.
    CLEAR <ls_table_data>.

    LOOP AT lt_fieldcat ASSIGNING FIELD-SYMBOL(<fs_fieldcat>).
      IF <fs_fieldcat>-fieldname = 'MANDT'.
        ASSIGN COMPONENT <fs_fieldcat>-fieldname OF STRUCTURE <ls_table_data> TO <lv_field>.
        IF sy-subrc = 0.
          <lv_field> = sy-mandt.
        ENDIF.
      ELSE.
        CLEAR lv_offset.
        DO strlen( ls_line ) TIMES.
          DATA(lv_index) = sy-index.

          DATA(lv_single) = substring( val = ls_line off = lv_index - 1 len = 1 ).
          IF lv_index = 1 AND lv_single = '|'.
            lv_offset = lv_offset + 1.
          ELSEIF lv_single = '|'.   "New field
            lv_until = lv_index - lv_offset - 1.
            lv_field = ls_line+lv_offset(lv_until).
            lv_offset = lv_offset + lv_until + 1.

            ASSIGN COMPONENT <fs_fieldcat>-fieldname OF STRUCTURE <ls_table_data> TO <lv_field>.
            IF sy-subrc = 0.
              <lv_field> = lv_field.
            ENDIF.
            ls_line = ls_line+lv_offset.
            EXIT.
          ENDIF.
        ENDDO.
      ENDIF.
    ENDLOOP.

    APPEND <ls_table_data> TO <lt_table_data>.

  ENDLOOP.

* Insert data into the database table
  INSERT (p_table) FROM TABLE <lt_table_data>.
  IF sy-subrc = 0.
    MESSAGE 'Data successfully inserted into table' TYPE 'S'.
  ELSE.
    MESSAGE 'Error inserting data into table' TYPE 'E'.
  ENDIF.

HANA database partitioning

1. Introduction

The partitioning feature of the SAP HANA database splits column-store tables horizontally into disjunctive sub-tables or partitions. In this way, large tables can be broken down into smaller, more manageable parts.

Partitioning is only available for tables located in the column store. The row store does not support partitioning.

BW systems are handled separately, please refer to chapter “BW Systems”.

1.1 Reasons and background of partitioning

Following are some of the reasons, next to the advantages described later, to perform partitioning:

  • In SAP HANA, a non-partitioned column-store table can't store more than 2 billion rows.
  • Large table / partition sizes in column store are mainly critical with respect to table optimizations like delta merges and optimize compressions (SAP Notes 2057046 – FAQ: SAP HANA Delta Merges, 2112604 – FAQ: SAP HANA Compression):
    • Memory requirements are doubled at the time of the table optimization.
    • There is an increased risk of locking issues during table optimization.
    • The CPU consumption can be significant, particularly during optimize compression runs.
    • The I/O write load for savepoints is significant and can lead to trouble like a long critical phase (SAP Note 2100009 – FAQ: SAP HANA Savepoints)
  • SAP HANA NSE: range partitions with old data can be offloaded more easily.

Therefore you should avoid particularly large tables and partitions and consider a more granular partitioning instead. A reasonable size threshold is typically 50 GB; when this limit is exceeded, a more granular partitioning is useful.

1.2 Best Practices

The following best practices should be kept in mind:

  • Keep the number of partitioned tables low
  • Keep the number of partitions per table low (maximum 8 partitions)
  • Maximum 100 – 200 million rows per partition (recommended).
  • Define partitioning on as few columns as possible
  • For SAP Suite on HANA, keep all partitions on same host
  • Repartitioning rules: When repartitioning, choose the new number of partitions as a multiple or divider of current number of partitions.
  • Avoid unique constraints
  • Throughput during (re)partitioning: 10 – 100 GB/hour

Prefer HASH partitioning on a selective column that is part of the primary key; check which column is used most often for selection and sorting.

1.3 Advantages

These are some advantages of partitioning:

  • Load Balancing: in a distributed system, individual partitions can be distributed across multiple hosts.
  • Record count: Storing more than 2 billion rows in a table.
  • Parallelization: Operations can be parallelized by using several execution threads.
  • Partition Pruning: Queries are analyzed to see if they match the given partitioning specification of a table (STATIC) or the content of specific columns in aging tables (DYNAMIC).
    Remark: When a table is range partitioned based on MONTH and in the WHERE clause YEAR is selected, all partitions are scanned and not only the 12 partitions belonging to the year.
  • Delta merge performance: Only changed partitions must be duplicated in the RAM, instead of the entire table.

1.4 Partitioning Types

The following partitioning types can be used, but normally only HASH and RANGE are used:

  • HASH: Distribute rows to partitions equally for load balancing and to overcome the 2 billion row limitation.
  • ROUND-ROBIN: Achieve an equal distribution of rows to partitions.
  • RANGE: Dedicated partitions for certain values or value ranges in a table.
  • Multi-level (HASH/RANGE): first partition on level 1, then on level 2.

1.5 Parameters

The following optional parameters can be set to optimize HANA partitioning, if required.

Inifile | Section | Parameter | Value | Remark
indexserver.ini | joins | single_thread_execution_for_partitioned_tables | false | Allow parallelization
indexserver.ini | partitioning | split_threads | <number> | Parallelization number for repartitioning; 80% of max_concurrency
indexserver.ini | table_consistency_check | check_repartitioning_consistency | true | Implicit consistency check

SQL commands:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('joins','single_thread_execution_for_partitioned_tables') = 'false' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('partitioning','split_threads') = '<number>' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('table_consistency_check','check_repartitioning_consistency') = 'true' WITH RECONFIGURE;

1.6 Privileges

The following privileges should be granted to the user executing the partitioning:

  • System privilege: PARTITION_ADMIN

For the user examining the partitioning, the following privilege might also be of interest:

  • SELECT on schema

1.7 Remark on partitioning for NSE

Whenever a table is queried and that table (or partition) is not present in memory, HANA automatically loads it into memory, either partially or fully.

If the table is partitioned, only the specific partition that contains the queried rows is loaded into memory.

Even if you only need one row from a partition of a table with 1 billion records, that entire partition will be loaded, either partially or fully.

In HANA a single row alone is never loaded into memory from a table.

2. Determine candidates

Check which tables are larger than 50 GB or have more than 1 billion records:

select a.table_name, (select string_agg(column_name,', ') from index_columns where constraint = 'PRIMARY KEY' and table_name = a.table_name group by table_name) "Primary Key Columns",
case a.is_partitioned
when 'TRUE'
then (select LEVEL_1_TYPE || '(' || replace(LEVEL_1_EXPRESSION,'"','') || ')#' || LEVEL_1_COUNT from partitioned_tables where table_name = a.table_name)
else 'No'
end as "Current Partitioning",
a.record_count "Rows", to_decimal(a.table_size/1024/1024/1024,2,2) "Size GB"
from m_tables a where a.IS_COLUMN_TABLE = 'TRUE'
and (a.record_count > 1000000000 or a.table_size/1024/1024/1024 > 50)
order by a.table_name;

The output will display:

  • TABLE_NAME: Name of the table
  • Primary Key Columns: The columns on which the primary key is created
  • Current Partitioning: If the table is currently partitioned and the partition type, columns and number of partitions
  • Rows: Number of records in the table
  • Size GB: Size of the table in memory

3. Determine Partitioning Strategy

In order to choose an appropriate column for partitioning we need to analyze the table usage in depth.

3.1 Technical Tables

3.1.1 With recommendations

In case you can find an exact partitioning recommendation for a specific table, please follow that recommendation. Check SAP Note 2044468 – FAQ: SAP HANA Partitioning for the latest information. Only the most common tables are listed here.

Note: Be aware that for tables making use of Data Aging (SAP Note 2416490 FAQ: SAP HANA Data Aging in SAP S/4HANA) or NSE (SAP Note 2799997 FAQ: SAP HANA Native Storage Extension (NSE)) there are certain limitations related to partitioning and so the simple standard approach isn’t possible.

Table Name | Partition Type | Partition Column(s) | Remark
/1CADMC/<id> | HASH | IUUC_SEQUENCE |
ACDOCA | HASH | BELNR | If data volume is limited, otherwise see SAP Note 2289491 Best Practices for Partitioning of Finance Tables
ADRC, ADRU | HASH | ADDRNUMBER |
AFFV | HASH | AUFPL |
AUSP | HASH | OBJEK |
BALDAT | HASH | LOG_HANDLE |
BDSCONT10, DMS_CONT1_CD1, SBCMCONT1 | HASH | PHIO_ID |
BKPF, BSEG, BSIS | HASH | BELNR |
CDHDR, CDPOS | HASH | OBJECTID, CHANGENR or TABKEY | Use the column with the best value distribution and use the same column for both tables if possible; in some cases OBJECTID for CDHDR and CHANGENR for CDPOS can be the best solution
CKMLCR, CKMLKEPH | HASH | KALNR |
COBK, COEP | HASH | BELNR |
COFV | HASH | CRID |
DBTABLOG | HASH | LOGID |
EDID4, EDIDS | HASH | DOCNUM |
EQKT | HASH | EQUNR |
IDOCREL | HASH | ROLE_A or ROLE_B | The column with the better value distribution
JCDS, JEST | HASH | OBJNR |
JVTLFZUO | HASH | VBELN |
KEPH | HASH | KALNR |
KONV | HASH | KNUMV |
MATDOC | HASH | MBLNR |
MBEW, MBEWH, MVER, MYMFT | HASH | MATNR |
MSEG | HASH | MBLNR |
PCL2, PCL4 | HASH | SRTFD |
RESB | HASH | RSNUM |
RSEG | HASH | BELNR |
SOC3 | HASH | SRTFD |
SRRELROLES | HASH | OBJKEY |
STXL | HASH | TDNAME |
SWWCNTP0, SWWLOGHIST | HASH | WI_ID |
VBFA | HASH | SoH: VBELV; S/4HANA: RUUID |

3.1.2 Without recommendations

If there is no specific partitioning recommendation by SAP (e.g. because the table is not listed in SAP Note 2044468 FAQ: SAP HANA Partitioning, or it is a customer-specific table), please follow the approach described in chapter "3.3 Other tables".

3.2 Finance Tables

There are some recommendations for finance tables in SAP Note 2289491 Best Practices for Partitioning of Finance Tables. Please find some of them below:

Table Name | Partition Type | Partition Column(s) | Remarks
ACDOCA | RANGE | FISCYEARPER |
FAGLFLEXA (n/a for S/4HANA) | RANGE | RYEAR | If data volume is not significantly above 1 billion records per year
FAGLFLEXA (n/a for S/4HANA) | RANGE | RBUKRS | Only if there is a reasonable data distribution by company code and the expected data volume of the largest company code is not significantly above 1 billion records
FAGLFLEXA (n/a for S/4HANA) | HASH | DOCNR | If none of the above is possible
BSEG, BSE_CLR | HASH | BELNR, BELNR_CLR | Try to keep BSEG as small as possible by summarization or other options, e.g. described in note 2591291 FAQ S/4HANA: Error F5 727 when posting via Accounting Interface
BSIS, BSAS, BSID, BSAD, BSIK, BSAK (n/a for S/4HANA) | RANGE | BUKRS (BELNR if partitioning by company code is not reasonable or possible) |
FAGL_SPLINFO, FAGL_SPLINFO_VAL | HASH | BELNR |
ACCTIT, ACCTHD, ACCTCR | HASH | AWREF | Only if SAP Note 178476 High increase of table ACCTIT, ACCTHD or ACCTCR is not applicable

3.3 Other tables

3.3.1 Check with functional teams

Check with the functional team(s).

Ask them which column(s) they query frequently and which column is always part of the WHERE clause.

If they are not sure, we can help them with the plan cache data.

3.3.2 Check M_SQL_PLAN_CACHE

With the help of the query below we can get a list of the most expensive statements on the table and identify the WHERE clauses used:

select top 10 upper(SUBSTR_AFTER(STATEMENT_STRING, 'WHERE')), EXECUTION_COUNT, TOTAL_EXECUTION_TIME
from M_SQL_PLAN_CACHE
where STATEMENT_STRING like '%TABLE%'
and not STATEMENT_STRING like 'select upper(SUBSTR_AFTER(STATEMENT_STRING%'
and not upper(SUBSTR_AFTER(STATEMENT_STRING, 'WHERE')) like ''
and not upper(SUBSTR_AFTER(STATEMENT_STRING, 'WHERE')) like '%UNION%'
and TOTAL_EXECUTION_TIME > 0 and EXECUTION_COUNT > 5
order by TOTAL_EXECUTION_TIME desc;

From the result you have to analyze the where clause and find a common pattern.

Let’s assume that table CDPOS is accessed mostly via MANDT and CHANGENR (This is by the way the case in many SAP customer systems), the solution would be to implement range-range multi-level partitioning for column MANDT (level 1) and column CHANGENR (level 2).

For sure some SQL statements will have to look into several partitions of one MANDT when CHANGENR is not used in the where-clause.

3.3.3 Join Statistics

Check which columns of this table are used in joins. Use SQL script HANA_SQL_Statistics_JoinStatistics_1.00.120+ from OSS note 1969700 – SQL Statement Collection for SAP HANA and adjust the modification section as shown below.

  ( SELECT                    /* Modification section */
      '1000/10/18 07:58:00' BEGIN_TIME,                  /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S<seconds>, C-M<minutes>, C-H<hours>, C-D<days>, C-W<weeks>, E-S<seconds>, E-M<minutes>, E-H<hours>, E-D<days>, E-W<weeks>, MIN */
      '9999/10/18 08:05:00' END_TIME,                    /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S<seconds>, C-M<minutes>, C-H<hours>, C-D<days>, C-W<weeks>, B+S<seconds>, B+M<minutes>, B+H<hours>, B+D<days>, B+W<weeks>, MAX */
      'SERVER' TIMEZONE,                              /* SERVER, UTC */  
      '%' HOST,
      '%' PORT,
      '<SAP Schema>' SCHEMA_NAME,
      '<TABLE_NAME>' TABLE_NAME,
      '%' COLUMN_NAME,
      'TABLE' ORDER_BY            /* TABLE, REFRESH_TIME, MEMORY */
    FROM
      DUMMY

From the output you can determine on which column most joins happen; a HASH partitioning on this column will make the query runtime faster.

3.3.4 Enable SQL Trace

Enable an SQL trace for the specific table.

3.3.5 No specific range values

When there are no specific range values that are frequently queried, and most columns are used most of the time, a HASH algorithm is the best fit.

It is similar to round-robin partitioning, but the data is distributed according to a hash algorithm on one or two designated primary key columns:

  • A HASH algorithm can only be applied to PRIMARY KEY fields.
  • Do NOT choose more than 2 primary key fields for HASH.
  • Within the primary key, check which column has the most distinct values. That column can be chosen for repartitioning.

To determine which primary key column can be chosen for repartitioning, perform the steps below.

  1. Load the table fully into memory:
    LOAD TABLE ALL;
  2. Select the Primary Key columns and the distinct records per column:
    select a.column_name, sum(b.distinct_count)
    from index_columns a, m_cs_columns b
    where a.table_name = 'TABLE'
    and a.constraint = 'PRIMARY KEY'
    and a.table_name = b.table_name
    and a.column_name = b.column_name
    group by a.column_name
    order by sum(b.distinct_count) desc;

3.3.6 NSE-based partitioning

Check if there is a column with a date-like format in the primary key. It might be a candidate for NSE based partitioning.

select distinct a.COLUMN_NAME, a.DATA_TYPE_NAME, a.LENGTH, a.DEFAULT_VALUE
from TABLE_COLUMNS a, INDEX_COLUMNS b
where a.TABLE_NAME = b.TABLE_NAME
and a.COLUMN_NAME = b.COLUMN_NAME
and replace(a.DEFAULT_VALUE,'0','') = ''
and length(a.DEFAULT_VALUE) >= 4
and not a.DEFAULT_VALUE = ''
and a.LENGTH between 4 and 14
and b.CONSTRAINT = 'PRIMARY KEY'
and a.TABLE_NAME = 'TABLE';

This query only gives an indication. To be sure, query the table itself.

Run the following query and execute the output as the schema owner:

select 'select top 5 ' || string_agg(COLUMN_NAME,', ') || ' from ' || TABLE_NAME || ';'
from INDEX_COLUMNS where CONSTRAINT = 'PRIMARY KEY' and TABLE_NAME = 'TABLE' group by TABLE_NAME

Output looks like:

select top 5 MANDT, KALNR, BDATJ, POPER, UNTPER from CKMLPP;

Execute the output-query and the result looks like:

MANDT,KALNR,BDATJ,POPER,UNTPER
"100","000117445808","2016","012","000"
"100","000101228530","2016","012","000"
"100","000112967972","2016","012","000"
"100","000112967974","2016","012","000"
"100","000101253542","2016","012","000"

In this case the column BDATJ contains the YEAR and the column POPER contains the MONTH.

Another example might be table DBTABLOG:

select top 5 distinct LOGDATE, LOGTIME, LOGID from DBTABLOG;

Execute the output-query and the result looks like:

LOGDATE,LOGTIME,LOGID
"20200522","225740","721651"
"20200522","114541","061031"
"20200522","114541","063115"
"20200522","114541","064398"
"20200522","114541","065345"

In this case the column LOGDATE contains a date format, but we need to convert this to the YEAR. This can be done by dividing LOGDATE by 10000 (from 8 characters to 4 characters).

4. Determine Partitions

4.1 Ranges

In case of RANGE partitioning, execute the queries in the chapters below (depending on the column value that can be used) to get an indication of the number of partitions and the number of rows per partition.

Eventually some partition ranges can be combined.

The ranges (start and end), the number of records and the estimated size per partition will be displayed.

Note: these queries should be executed by the SAP Schema user!

4.1.1 Range on part of column

When only part of a column can be used, the column value has to be divided by a power of ten that removes the unwanted characters.

This is the case when you want to use the YEAR from a string that contains YEAR, MONTH, DAY and TIME.

Note: edit the query with the correct TABLE and the chosen COLUMN, and if required change the divider (RANGE in the query). For example, if you want to remove 4 characters from the column, divide by 10000 (1 with 4 zeroes).

Example:

  • From 8 to 4 characters: divide by 10000 (4 zeroes)
  • From 8 to 6 characters: divide by 100 (2 zeroes)

Query:

select "Start", "End", "Rows", to_decimal("Est_Size_Gb"*"Rows"/1024/1024/1024,2,2) "Est_Size_GB" from (select to_decimal(COLUMN/RANGE,1,0)*RANGE "Start", to_decimal(COLUMN/RANGE,1,0)*RANGE+RANGE "End", count(*) "Rows", (select table_size/record_count from m_tables where table_name = 'TABLE') "Est_Size_Gb"
from TABLE
group by to_decimal(COLUMN/RANGE,1,0)*RANGE
order by to_decimal(COLUMN/RANGE,1,0)*RANGE);

4.1.2 Range on entire column

The entire column value can be used as a partition.

This is the case when you want to use the YEAR from a string that equals the YEAR.

Query:

select "Start", "Rows", to_decimal("Est_Size_Gb"*"Rows"/1024/1024/1024,2,2) "Est_Size_GB"
from (select COLUMN "Start", count(*) "Rows", (select table_size/record_count from m_tables where table_name = 'TABLE') "Est_Size_Gb"
from TABLE
group by COLUMNorder by COLUMN);

4.2 Determine number of partitions

In case of HASH partitioning, the number of partitions has to be determined.

A good sizing method is:

  • Keep the number of partitions per table low (maximum 8 partitions)
  • Maximum 100 – 200 million rows per partition (recommended).
  • Calculate the number of partitions with the query below.

The number of partitions can be determined with the following query, based on the number of records in the table divided by 100 million (when this results in too many partitions (> 8), divide by a higher value), or on the size of the table in GB divided by 50:

select to_decimal(round(RECORD_COUNT/100000000,0,round_up),1,0) PART_ON_ROWS,
to_decimal(round(table_size/1024/1024/1024/50,0,round_up),1,0) PART_ON_SIZE
from m_tables where TABLE_NAME = 'TABLE';

5 Implementation

5.1 SQL Commands

The following commands can be used to perform the actual partitioning of tables:

Action | SQL Command
HASH Partitioning | ALTER TABLE TABLE PARTITION BY HASH (COLUMN) PARTITIONS X;
ROUND-ROBIN Partitioning | ALTER TABLE TABLE PARTITION ROUNDROBIN X;
RANGE Partitioning (on part of column) | ALTER TABLE TABLE PARTITION BY RANGE (COLUMN) (PARTITION 0 <= VALUES < 1000, PARTITION XXX <= VALUES < YYY, PARTITION OTHERS);
RANGE Partitioning (on full column) | ALTER TABLE TABLE PARTITION BY RANGE (COLUMN) (PARTITION VALUE = 1000, PARTITION VALUE = 2000, ..., PARTITION OTHERS);
Multi-level (HASH/RANGE) Partitioning | ALTER TABLE TABLE PARTITION BY HASH (COLUMN1) PARTITIONS X, RANGE (COLUMN2) (PARTITION 0 <= VALUES < 1000, PARTITION XXX <= VALUES < YYY, PARTITION OTHERS);
Move partitions to other servers | ALTER TABLE TABLE MOVE PARTITION X TO 'server_name:3nn03' PHYSICAL;
Add new RANGE to existing partitioned table (on part of column) | ALTER TABLE TABLE ADD PARTITION XXX <= VALUE < YYY;
Add new RANGE to existing partitioned table (on full column) | ALTER TABLE TABLE ADD PARTITION VALUE = YYY;
Drop existing RANGE | ALTER TABLE TABLE DROP PARTITION XXX <= VALUE < YYY;
Adjust partitioning type | ALTER TABLE TABLE PARTITION ...
Delete partitioning | ALTER TABLE TABLE MERGE PARTITIONS;

5.2 Automatic new partitions

There is an automatic way to add new partitions. Besides dynamic range partitioning based on a record threshold, starting with SPS06 there is also an interval option for range partitions.

When a new dynamic partition is required, SAP HANA renames the existing OTHER partition appropriately and creates a new empty partition.

Thus, no data needs to be moved and the process of dynamically adding a partition is very quick.

With this feature you can use the following parameters to automate the split of the dynamic partition based on the number of records.

Inifile | Section | Parameter | Default | Unit | Remark
indexserver.ini | partitioning | dynamic_range_default_threshold | 10000000 | rows | automatic split once the row threshold is reached
indexserver.ini | partitioning | dynamic_range_check_time_interval_sec | 900 | sec | how often the threshold check is performed

Note: these thresholds can be changed to meet the requirements. The threshold can also be set per table, with: ALTER TABLE T PARTITION OTHERS DYNAMIC THRESHOLD 500000;

The partitioning columns need to be dates or numbers.
Dynamic interval is only supported when the partition column type is TINYINT, SMALLINT, INT, BIGINT, DATE, SECONDDATE or LONGDATE.
If no <interval_type> is specified, INT is used implicitly.

To check the DATA_TYPE of the selected column, execute the following query:

select COLUMN_NAME, DATA_TYPE_NAME, LENGTH
from TABLE_COLUMNS
where TABLE_NAME = 'TABLE'
and COLUMN_NAME = 'COLUMN';

Examples:

Action | SQL Command
Quarterly new partition | ALTER TABLE TABLE PARTITION OTHERS DYNAMIC INTERVAL 3 MONTH;
Half-yearly new partition | ALTER TABLE TABLE PARTITION OTHERS DYNAMIC INTERVAL 6 MONTH;
New partition after 2 years | ALTER TABLE TABLE PARTITION OTHERS DYNAMIC INTERVAL 2 YEAR;

5.3 Check Progress

To check the overall progress of a running partitioning execution, run the following query:

select 'Overall Progress: ' || to_decimal(sum(CURRENT_PROGRESS)/sum(MAX_PROGRESS)*100,2,2) || '%'
from M_JOB_PROGRESS
where JOB_NAME = 'Re-partitioning'
and OBJECT_NAME like 'TABLE%';

To check the progress in more detail, run:

select TO_NVARCHAR(START_TIME,'YYYY-MM-DD HH24:MI:SS') START_TIME, to_decimal(CURRENT_PROGRESS/MAX_PROGRESS*100,3,2) || '%' "PROGRESS%", OBJECT_NAME, PROGRESS_DETAIL
from M_JOB_PROGRESS
where JOB_NAME = 'Re-partitioning'
and OBJECT_NAME like 'TABLE%';

6 Aftercare

6.1 HASH Partitioning

For HASH Partitioning, regularly check the number of records per partition and consider repartitioning.

When repartitioning, choose the new number of partitions as a multiple or divider of current number of partitions.

6.2 RANGE Partitioning

For tables with RANGE partitioning, new partitions should be created regularly, when a new range is reached.

Old partitions which are not required anymore, can be dropped.

Next to that, checks need to be performed that not too many rows reside in the OTHERS partition.

To review if new partitions should be added to existing partitioned tables and if records are present in the “OTHERS” partition, execute the following query:

select a.table_name, replace(b.LEVEL_1_EXPRESSION,'"','') "Column", b.LEVEL_1_COUNT "Partitions",
max(c.LEVEL_1_RANGE_MIN_VALUE) "Last_Range_From",
CASE max(c.LEVEL_1_RANGE_MAX_VALUE) WHEN max(c.LEVEL_1_RANGE_MIN_VALUE) THEN 'N/A' ELSE max(c.LEVEL_1_RANGE_MAX_VALUE) END "Last_Range_To",
(select record_count from m_cs_tables where part_id = b.LEVEL_1_COUNT and table_name = a.table_name) "Rows in OTHERS"
from m_tables a, partitioned_tables b, table_partitions c
where a.IS_COLUMN_TABLE = 'TRUE'
and a.is_partitioned = 'TRUE'
and b.level_1_type = 'RANGE'
and a.table_name = b.table_name
and b.table_name = c.table_name
and b.LEVEL_1_COUNT > 1
group by a.table_name, b.LEVEL_1_EXPRESSION, b.LEVEL_1_COUNT
order by a.table_name;

When the "Last_Range_To" column contains a date-like value and that date is already (or almost) in the past, add a new partition.

For the tables that have non-zero values in the column "Rows in OTHERS", run the following check to determine why these rows are in the OTHERS partition and whether extra partitions should be added.

Note: edit the query with the correct TABLE and COLUMN and if required change the divider number. For example, if you want to remove 4 characters from the column, divide by 10000 (1 with 4 zeroes).

Example:

  • From 8 to 4 characters: divide by 10000 (4 zeroes)
  • From 8 to 6 characters: divide by 100 (2 zeroes)
select a."Start", a."End", case when b.part_id is not null then to_char(b.part_id) else 'OTHERS' end "Partition"
from (select to_decimal(COLUMM/RANGE,1,0)*RANGE"Start", to_decimal(COLUMM/RANGE,1,0)*RANGE+RANGE "End"
from TABLE
group by to_decimal(COLUMM/RANGE,1,0)*RANGE
order by to_decimal(COLUMM/RANGE,1,0)*RANGE) a
left outer join
(select part_id,
case when level_1_range_min_value <> '' then to_decimal(level_1_range_min_value,1,0) else '0' end "Start",
case when level_1_range_max_value <> '' then to_decimal(level_1_range_max_value,1,0) else '0' end "End"
from table_partitions where table_name = 'TABLE') b
on a."Start" >= b."Start" and a."Start" < b."End";

All these actions can be done with the commands as specified in chapter “5.1 SQL Commands”.

6.3 Partitioning Consistency Check and Repair

Once partitioning has been implemented, some consistency checks can and should be performed regularly.

To ensure consistency for partitioned tables, execute checks and repair statements, if required.

You can call general and data consistency checks for partitioned tables to check, for example, that the partition specification, metadata, and topology are correct.

If any of the tests encounter an issue with a table, the statement returns a row with details on the error. If the result set is empty (no rows returned), no issues were detected.

6.3.1 General check

Checks the consistency among partition specification, metadata and topology:

CALL CHECK_TABLE_CONSISTENCY('CHECK_PARTITIONING', 'SCHEMA', 'TABLE');

6.3.2 Extended check

General check plus check whether all rows are located in correct parts:

CALL CHECK_TABLE_CONSISTENCY('CHECK_PARTITIONING_DATA', 'SCHEMA', 'TABLE');

6.3.3 Dynamic range partitioning check

Check for illegal data in a dynamic range OTHERS partition.

Only sequential numerical data is permitted in such a partition, but a varchar column, for example, could include illegal characters.

This check only works for OTHERS partitions that are dynamic-range enabled:

CALL CHECK_TABLE_CONSISTENCY('CHECK_PARTITIONING_DYNAMIC_RANGE', 'SCHEMA', 'TABLE');

6.3.4 Repairing rows that are located in incorrect parts

If the extended data check detects that rows are located in incorrect partitions, this can be repaired by executing:

CALL CHECK_TABLE_CONSISTENCY('REPAIR_PARTITIONING_DATA', 'SCHEMA', 'TABLE');
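
After the repair, it is good practice to re-run the extended check and confirm that the result set is now empty; the schema and table names below are hypothetical:

-- Re-run the extended check after the repair (hypothetical schema/table)
CALL CHECK_TABLE_CONSISTENCY('CHECK_PARTITIONING_DATA', 'SAPABAP1', 'ZSALES_DOC');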

7 BW Systems

BW takes care of the partitioning of its tables on its own; manual intervention is usually not required. You mainly have to make sure that the table placement configuration is maintained properly (SAP Notes 1908075 SAP BW on SAP HANA: Table Placement and Landscape Redistribution and 2334091 SAP BW/4HANA: Table Placement and Landscape Redistribution). Table distribution (SAP Note 2143736 FAQ: SAP HANA Table Distribution for BW) will then implement the configuration.

The number of first-level partitions depends on the number of records in the largest table of the table group and on the TABLE_PLACEMENT configuration.

Default scenario:

  • < 40 million records: 1 partition
  • 40 – 120 million records: 3 partitions
  • 120 – 240 million records: 6 partitions
  • > 240 million records: 12 partitions
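
Since BW manages the partitioning itself, this is only an indicator, but a quick sketch against M_CS_TABLES can show which column-store tables already exceed the first threshold; the schema name is hypothetical:

select SCHEMA_NAME, TABLE_NAME, sum(RECORD_COUNT) RECORD_COUNT
from M_CS_TABLES
where SCHEMA_NAME = 'SAPABAP1' -- hypothetical BW schema name
group by SCHEMA_NAME, TABLE_NAME
having sum(RECORD_COUNT) > 40000000
order by sum(RECORD_COUNT) desc;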

7.1 Table Placement and Landscape Redistribution

Please check SAP Note 1908075 – SAP BW on SAP HANA: Table Placement and Landscape Redistribution.

In all SAP BW on SAP HANA systems (single node and scale-out), maintain the table placement rules and the required SAP HANA parameters as explained below.

7.1.1 Determine placement strategy

Extract the file TABLE_PLACEMENT.ZIP from the attachment of SAP Note 1908075 SAP BW on SAP HANA: Table Placement and Landscape Redistribution. Use the following matrix to determine the suitable file (or files) and the corresponding folder:

HANA 1.0 SPS12 and HANA 2.0:

For systems up to and including 2 TB per node:
  • Single node: 010 = Single node (up to and including 2 TB)
  • Scale-out with 1 index server coordinator + 1 active index server worker node: 020 = Scale-out (up to and including 2 TB per node) with 1 coordinator and 1 worker node
  • Scale-out with 1 index server coordinator + 2 active index server worker nodes: 030 = Scale-out (up to and including 2 TB per node) with 1 coordinator and 2 worker nodes
  • Scale-out with 1 index server coordinator + 3 or more active index server worker nodes: 040 = Scale-out (up to and including 2 TB per node) with 1 coordinator and 3 or more worker nodes

For systems with more than 2 TB per node:
  • Single node: 050 = Single node (more than 2 TB)
  • Scale-out with 1 index server coordinator + 1 active index server worker node: 060 = Scale-out (more than 2 TB per node) with 1 coordinator and 1 worker node
  • Scale-out with 1 index server coordinator + 2 active index server worker nodes: 070 = Scale-out (more than 2 TB per node) with 1 coordinator and 2 worker nodes
  • Scale-out with 1 index server coordinator + 3 or more active index server worker nodes: 080 = Scale-out (more than 2 TB per node) with 1 coordinator and 3 or more worker nodes

Notes:

  • Scale-out configurations with less than two active index server worker nodes are not recommended if the scale-out system uses <= 2 TB per node. See SAP Note 1702409 for details.
  • InfoCubes and classic/advanced DataStore Objects in scale-out systems are distributed over all nodes – including the coordinator – if the nodes are provided with more than 2 TB main memory. This optimizes memory usage of all nodes – including the coordinator. If this frequently leads to situations with excessively high CPU load on the coordinator, certain BW objects must be distributed to other nodes to reduce the CPU load.
  • With SAP HANA 1.0 SPS 12 and SAP HANA 2.0, a table for a BW object can have more partitions at the first level than there are valid locations (hosts) for this table, but only if the main memory of the nodes exceeds 2 TB. This is achieved by setting the parameter ‘max_partitions_limited_by_locations’ to ‘false’. The maximum number of partitions at the first level is limited by the parameter ‘max_partitions’. Its value is set depending on the number of hosts.
    Some operations in HANA use parallel processing at partition level; with more partitions, the CPU resources of larger HANA servers are used more efficiently. If this frequently leads to situations with excessively high CPU load on some or all HANA nodes, it may be necessary to manually adjust the partitioning rules (for some BW objects) to reduce the CPU load. In this context ‘manually’ means that a customer-defined range partitioning at the second level must be adjusted, or additional, BW object-specific table placement rules must be created. Please note that manual changes to the partitioning specification of BW-managed tables at database level (via SQL statements) are not supported.
  • In scale-out systems with 1.5 or 2 TB per node and efficient BW housekeeping, the memory usage on the coordinator may be low because InfoCubes and DataStore Objects are not distributed across all nodes (as is the case for systems with more than 2 TB per node). In this scenario, it is not supported to use the rules for table placement for systems with more than 2 TB per node. Instead, you can check the option of identifying DataStore Objects (advanced) that are used as corporate memory, on which little or no reporting takes place, and placing these objects on the coordinator node. This may be a workaround to make better use of the main memory on the coordinator node without causing a significant increase in the CPU load on the coordinator. DataStore objects (advanced) can be placed on the coordinator node with the aid of object-specific rules for table placement. These customer-specific rules for table placement must be reversed if this causes an overly high memory or CPU load on the coordinator node.

    Prerequisites:
    • only for scale-out systems with 1.5 or 2 TB per node
    • only for DataStore Objects (advanced), not for classic DataStore Objects (due to the different partitioning specifications)
    • only for DataStore Objects (advanced) that are used as corporate memory, i.e. DataStore Objects (advanced) without activation
    • sizing rules as documented in the attachment ‘advanced DataStore objects type corporate memory on coordinator node.sql’ must be respected

7.1.2 Maintain scale-out parameters

In SAP BW on SAP HANA scale-out systems only, maintain the parameters recommended in SAP Note 1958216 for the SAP HANA SPS you use.

In an SAP HANA scale-out system, tables and table partitions are assigned to an indexserver on a particular host when they are created. As the system evolves over time you may need to optimize the location of tables and partitions by running automatic table redistribution.

Different applications require different configuration settings for table redistribution.

For systems running SAP BW on SAP HANA or SAP BW/4HANA – SAP HANA 2.0 SPS 04 and higher:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('table_redist','balance_by_execution_count') = 'false' WITH RECONFIGURE COMMENT 'See SAP Note 1958216';
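
To verify the setting afterwards, a sketch against the M_INIFILE_CONTENTS monitoring view can be used (a row is only returned where the parameter has been explicitly set):

select FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
from M_INIFILE_CONTENTS
where FILE_NAME = 'indexserver.ini'
and SECTION = 'table_redist'
and KEY = 'balance_by_execution_count';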

7.1.3 Check Table Classification

Make sure that all BW tables have the correct table classification.

For an existing SAP BW on SAP HANA system, you can check the table classification using the report RSDU_TABLE_CONSISTENCY and correct it if necessary. For information about executing the report, see SAP Note 1937062.

This note provides a file “rsdu_table_consistency_<version>.pdf”.

The report is not available on SAP BW/4HANA systems. Please check SAP Note 2668225 – Report RSDU_TABLE_CONSISTENCY is deprecated with SAP BW/4HANA.

Follow the instructions from the latest pdf file.

During the migration of an SAP BW system via SWPM, the report SMIGR_CREATE_DDL ensures that the correct table classification is set.
Before using one of the two reports, implement the current version of the SAP Notes listed in the file REQUIRED_CORRECTION_NOTES.XLSX attached to SAP Note 1908075. Filter the list in accordance with your release and Support Package, and implement the notes in the specified sequence using transaction SNOTE.

In a heterogeneous system migration using the Software Provisioning Manager (SWPM), you also have to implement these SAP Notes in the source system before you run the report SMIGR_CREATE_DDL. If you perform the migration using the Database Migration Option (DMO), you do not have to implement the SAP Notes in the source system. Instead, you should implement the SAP Notes in the target system or include a transport with those SAP Notes when the DMO prompts you to do so.

The operation of the report is strictly divided into two parts, Check and Repair, which cannot be combined in a single run!

7.1.3.1 Check tables for inconsistencies

The check for inconsistencies is purely read-only for both HANA and BW.

In Tx SE38, run report RSDU_TABLE_CONSISTENCY.

Select “Store issues” and select all checkboxes:

Run the report.


7.1.3.2 Display table consistencies

When you run the report with “Show issues in GUI”, the issues will be displayed.

When you run the report with “Store issues”, the issues will also be displayed when executing in the foreground.

An example is shown below:

When the report has been executed in the background with option “Store issues”, you can rerun the report and choose “Show”.

Only the issues will be displayed:

When you double click on an issue, the details will be displayed:

If you want to repair the issue, select the line and click Save and go back (the number of selected items will be displayed):

Please keep in mind that the displayed columns in the table differ between the check scenarios. Usually the following information is provided with all scenarios:

  • Exception: Provides information on the severity of the issue:
    • Red: Inconsistency found or error occurred. A repair is needed, or the failure must be eliminated with further tools.
    • Yellow: Warning of unexpected results, but no immediate action is needed.
    • Green: Additional info – no action required.
  • Status: indicates the current state of the issue regarding a possible repair action:
    • OK: no error or inconsistency – just info
    • REPAIRABLE: this inconsistency should be repairable within RSDU_TABLE_CONSISTENCY
    • IRREPARABLE: an inconsistency or an error occurred during the check which can’t be solved within the report. Additional actions or analysis needed to solve this issue
    • FAILED: an inconsistency for which a repair attempt failed. Refer to the entry in column ‘Reason’.
    • REPAIRED: indicates that the issue was successfully repaired.
  • Type: shows the type of table like Fact tables, PSA etc.
  • Reason: Describes why a table is classified as inconsistent. For errors that occurred during the check, the error text is shown. Some frequently occurring errors are described in section 6 (“Frequently obtained error messages and warnings”, page 14) of the PDF file.
  • Table: shows the table name.
  • BW Object: shows the BW Object (InfoCube name, DSO name etc.) the table is linked with.

7.1.3.3 Repair table inconsistencies

A user must first select the issues to be repaired before starting the repair sequence. Repairing an inconsistency always performs a write action on HANA table properties – the repair will never change any BW metadata!

Run report RSDU_TABLE_CONSISTENCY again and select repair (with the number of items selected).

Execute the program in background.

Check the job log and spool output in SM37:

Rerun the check report from chapter “7.1.3.1 Check tables for inconsistencies”. All should be green now!

7.1.4 Perform Database Backup

Perform a full HANA Database backup!

7.1.5 Start Landscape Redistribution

Start the landscape redistribution. Depending on which tool you are using, you will find detailed instructions in the SAP HANA Administration Guide or in the SAP HANA Data Warehousing Foundation – Data Distribution Optimizer.

These actions have to be executed as SYSTEM user in the HANA Tenant DB, preferably from the HANA Studio.

In HANA Studio, go to the Administration Console and select tabs “Landscape” and then “Redistribution”.

Note: These steps can take a long time and should be executed during quiet (maintenance) windows.

7.1.5.1 Save current Table Distribution

Save the current table distribution.

7.1.5.2 Optimize Table Distribution

In the Administration Console, tabs “Landscape” and then “Redistribution”, select “Optimize Table Distribution” and click Execute. Keep the default settings and click Next. A list of the new table distribution to be implemented is displayed. Click Execute to start the actual table redistribution. The progress can again be followed.

7.1.5.3 Optimize Table Partitioning

In the Administration Console, tabs “Landscape” and then “Redistribution”, select “Optimize Table Partitioning” and click Execute. Keep the default settings and click Next. A list of the new table partitioning to be implemented is displayed. Click Execute to start the actual table repartitioning. The progress can again be followed.

9 References

Credits: Rob Kelgtermans

Wily Introscope tips & tricks

Wily Introscope is used in SAP JAVA and BO(DS) systems for diagnostics and analysis. This blog will give tips & tricks for the Wily Introscope tool.

General KBA OSS note for Wily Introscope: 3013495 – Central KBA for Introscope (SV-SMG-DIA-WLY and XX-PART-WILY). The general public website is available via this link.

Landscape considerations and restrictions

OSS notes:

Installation and support

OSS notes:

Be aware that Wily Introscope is a 3rd party product with a short lifecycle: typically only 1 year of support.

Other relevant OSS notes



SAP audit log data archiving BC_SAL

The SAP audit log can have high volumes. For that reason, most companies use the file system to write the audit log.

If you have a SIEM (Security Information and Event Management) solution (like SAP Enterprise Threat Detection, Onapsis, SecurityBridge and many more) that needs to scan your audit log, the best way is to store the audit log data in the database. This ensures that the audit log can be analyzed at high performance.

The downside of storing in the database is that the audit log table can grow quite fast, which is expensive, especially if you run on HANA.

To counter the high costs, you can unload the data from the database into archive files for the SAP audit log. This means that your most recent data is in the database for fast analysis, and your history is on cheaper disk storage.

This blog will explain how to archive audit log data via object BC_SAL. Generic technical setup must have been executed already, and is explained in this blog.

If you still have files, you must first move the files to the database table before you can archive them. See OSS note 3055825 – RSAU_LOAD_FILES for transferring audit log data to the SAL database.

Object BC_SAL

Go to transaction SARA and select object BC_SAL.

Dependency schedule is empty, so there are no dependencies:

Main table that is archived:

Technical programs and OSS notes

Write program: RSAU_ARCHIVE_WRITE

Delete program: RSAU_ARCHIVE_DELETE

Read from archive: RSAU_ARCHIVE_READ

Reload from archive: RSAU_ARCHIVE_RELOAD (handle with care, as reloading can lead to inconsistencies and issues), see OSS note 3094328 – RSAU_ARCHIVE_RELOAD | Reloading Security Audit Log archives

Relevant OSS notes:

Application specific customizing

For archiving object BC_SAL there is no application specific customizing needed.

Executing the write run and delete run

A high level description of the run is given in OSS note 3137004 – How to archive and delete audit log from DB.

In transaction SARA, object BC_SAL, select the write run:

Select your data, save the variant and start the archiving write run.

After the write run is done, check the logs. BC_SAL archiving runs fast and has a high archiving percentage (normally 100%).

Provide a good name for the archive file for later use!

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Data retrieval is done via program RSAU_ARCHIVE_READ, or via SARA by selecting the read button:

After pressing execute, you will get a popup screen to select the archive files. Output is a simple ALV list with the audit log event details.

Audit log integrity protection

This blog will explain how to switch on integrity protection for file based audit log. For full explanation of the SAP audit log, read this blog.

The main OSS note for this feature is 2033317 – Integrity protection format for Security Audit Log.

Activation steps

Step 1. In RZ11 set parameter rsau/integrity to 1.

Step 2. In transaction RSAU_CONFIG, set the “Protection format active” tick box in the Parameter section:

Step 3. In transaction RSAU_ADMIN, create the HMAC key:

Step 4. Store this HMAC key safely, including the passphrase!

Checking and validation steps

To validate that the integrity of the audit log files is ok (no tampering has been done), start transaction RSAU_ADMIN and select the option “Check Integrity of the Files”:

Now run and see the results.

You can also run program RSAU_FILE_ADMIN in batch mode (for example every weekend), so that the integrity checking is done on a regular basis. In that case, you can use the faster option to Display the Last Integrity Check Status.

Reference OSS notes