Data archiving: customer and vendor master data

This blog will explain how to archive customer and vendor master data via objects FI_ACCRECV and FI_ACCPAYB. Generic technical setup must have been executed already, and is explained in this blog.

The main use case for this archiving is to delete customers and vendors that were created by mistake.

The text below mainly focuses on a traditional ECC system. In S/4HANA, customers and vendors are integrated as business partners. For archiving of business partners for customers and/or vendors, read OSS note 3321585 – Archiving for Business Partner and Customer / Suppliers.

If you also want to archive or delete the transaction figure tables KNC1 (customers) and LFC1 (vendors), implement the archiving objects FI_TF_DEB and FI_TF_CRE as well.

Object FI_ACCRECV (customers)

Go to transaction SARA and select object FI_ACCRECV (customers).

Dependency schedule:

There are a lot of dependencies: every object in which a customer number is used must be archived first. This makes it almost impossible to archive a customer master record. Still, it can be done to delete wrongly created master data for which no transaction data exists yet.

Main tables that are archived:

  • KNA1: General customer master data
  • KNB1: Company code specific customer master data

Object FI_ACCPAYB (vendors)

Go to transaction SARA and select object FI_ACCPAYB (vendors).

Dependency schedule:

Quite some dependencies: every object in which a vendor number is used must be archived first. This makes it almost impossible to archive a vendor master record. Still, it can be done to delete wrongly created master data for which no transaction data exists yet.

Main tables that are archived:

  • LFA1: General vendor master data
  • LFB1: Company code specific vendor master data

Technical programs and OSS notes

Write program customers: FI_ACCRECV_WRI

Delete program customers: FI_ACCRECV_DEL

Write program vendors: FI_ACCPAYB_WRI

Delete program vendors: FI_ACCPAYB_DEL

Relevant OSS notes:

Application specific customizing

There is no application specific customizing for customer and vendor archiving. You can use transaction XD06 to set the deletion flag on a customer master record and XK06 to set it on a vendor master record.
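To get a quick feel for the number of archiving candidates, you can check the central deletion flag directly in the master data tables. Below is a minimal ABAP sketch (the report name is just an example); it only counts records flagged via XD06/XK06 and does not replace the checks the write programs perform themselves.

  REPORT zcheck_master_deletion_flags.

  DATA: lv_customers TYPE i,
        lv_vendors   TYPE i.

  * Customers flagged for deletion centrally (flag set via XD06)
  SELECT COUNT(*) FROM kna1 INTO lv_customers WHERE loevm = 'X'.

  * Vendors flagged for deletion centrally (flag set via XK06)
  SELECT COUNT(*) FROM lfa1 INTO lv_vendors WHERE loevm = 'X'.

  WRITE: / 'Customers flagged for deletion:', lv_customers,
         / 'Vendors flagged for deletion:  ', lv_vendors.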

Executing the write run and delete run: customers

For customers: in transaction SARA, object FI_ACCRECV, select the write run:

Important here are the validation links and the deletion indicator. The customer deletion flag can be set with transaction XD06.

Select your data, save the variant and start the archiving write run.

There is a sequence inconsistency: the online help has the sequence FI, SD, general, while OSS note 788105 – Archiving FI_ACCRECV has the sequence SD, FI, general.

You have to do the write run three times: for FI, SD and general.
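Scheduling via SARA is the standard way. Purely as a sketch: if you want to schedule one of the three write runs as a background job yourself, this can be done with JOB_OPEN / SUBMIT / JOB_CLOSE. The variant name ZFI_CUSTOMERS below is an assumption; it must exist as a saved variant of FI_ACCRECV_WRI, and you would repeat this per view (FI, SD, general) in the chosen sequence.

  * Sketch: schedule one FI_ACCRECV write run in the background
  DATA: lv_jobname  TYPE btcjob VALUE 'Z_ARCH_FI_ACCRECV_FI',
        lv_jobcount TYPE btcjobcnt.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  * Variant ZFI_CUSTOMERS is an example; create it on the write program first
  SUBMIT fi_accrecv_wri
    USING SELECTION-SET 'ZFI_CUSTOMERS'
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      strtimmed = 'X'.    " start immediately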

The deletion run is standard: select the archive file and start the deletion run.

Executing the write run and delete run: vendors

For vendors: in transaction SARA, object FI_ACCPAYB, select the write run:

Important here are the validation links and the deletion indicator. The vendor deletion flag can be set with transaction XK06.

Select your data, save the variant and start the archiving write run.

You have to do the write run three times: for FI, MM and general. A sequence is given neither in an OSS note nor in the online help.

The deletion run is standard: select the archive file and start the deletion run.

Table AQLDB clean up

Table AQLDB is used for storing saved list data of SAP queries created via SQ01. Cleaning this up can be a tricky job.

Deletion of old query data

Deletion of old SAP query data is performed via program RSAQQLRE_MASS:

Background OSS note: 2336268 – SQ01: Reorganization of saved lists.

In case of inconsistencies, install program Z_INCONSISTENT_SAVED_LISTS from OSS note 2173291 – Saved List cannot be deleted by program RSAQQLRE.

Clean up of generated programs

Program RSAQDEL0 can be used to clean up generated query program data:

System queries, generated programs and obsolete programs can be cleaned up.

Background notes:

Data archiving: material ledger data

This blog will explain how to archive material ledger data via object CO_ML_DAT. Generic technical setup must have been executed already, and is explained in this blog.

Object CO_ML_DAT

Go to transaction SARA and select object CO_ML_DAT.

Dependency schedule:

Main tables that are archived:

  • CKMLCR (material ledger data)
  • CKMLPP (period totals)

Technical programs and OSS notes

Write program: SAPRCKMN_NEU

Delete program: SAPRCKMO_NEU

Read from archive: SAPRCKMP_NEU_LESEN

Reload program: SAPRCKMP_NEU_RUECKLADEN

Relevant OSS notes:

Application specific customizing

Archiving object CO_ML_DAT has no specific customizing. Retention period is set on the write program screen.

Executing the write run and delete run

In transaction SARA, object CO_ML_DAT, select the write run:

Select your data, save the variant and start the archiving write run.

After the write run is done, check the logs. CO_ML_DAT archiving is fast and has a high archiving percentage (up to 100%).

Provide a good name for the archive file for later use!

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Data retrieval is done via program SAPRCKMP_NEU_LESEN. Unfortunately the retrieval has no proper selection options and the output is hard to read.
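If the standard read program does not fit your needs, a custom read via the ADK function modules is an option. Below is a minimal sketch (report name and output fields are chosen as an example) that reads the CKMLCR records from the archive files of object CO_ML_DAT.

  REPORT zread_co_ml_dat.

  DATA: lv_handle TYPE sy-tabix,
        lt_ckmlcr TYPE STANDARD TABLE OF ckmlcr,
        ls_ckmlcr TYPE ckmlcr.

  * Open archive files of object CO_ML_DAT for reading
  * (in dialog you may be prompted to select the files)
  CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
    EXPORTING
      object         = 'CO_ML_DAT'
    IMPORTING
      archive_handle = lv_handle.

  * Loop over the archived data objects until the end of the files
  DO.
    CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
      EXPORTING
        archive_handle = lv_handle
      EXCEPTIONS
        end_of_file    = 1
        OTHERS         = 2.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.

  * Read all CKMLCR records of the current data object
    CALL FUNCTION 'ARCHIVE_GET_TABLE'
      EXPORTING
        archive_handle        = lv_handle
        record_structure      = 'CKMLCR'
        all_records_of_object = 'X'
      TABLES
        table                 = lt_ckmlcr.

    LOOP AT lt_ckmlcr INTO ls_ckmlcr.
      WRITE: / ls_ckmlcr-kalnr, ls_ckmlcr-bdatj, ls_ckmlcr-poper.
    ENDLOOP.
  ENDDO.

  CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
    EXPORTING
      archive_handle = lv_handle.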

Data archiving: MM accounting interface posting data

This blog will explain how to archive MM accounting interface posting data via object MM_ACCTIT. Generic technical setup must have been executed already, and is explained in this blog.

MM_ACCTIT is a strange object. Since it covers intermediate data that is normally not viewed by users, there is no read program; for the worst case there is a reload program. Also read below about the option to stop using these intermediate tables if the business does not require them.

Avoiding data in MM-accounting interface tables

Please check OSS note 48009 – Tables ACCTHD, ACCTIT, ACCTCR: Questions and answers to see if you can fully avoid data being written to the tables.

Object MM_ACCTIT

Go to transaction SARA and select object MM_ACCTIT.

Dependency schedule (no dependency):

Tables that are archived:

Technical programs and OSS notes

Write program: MM_ACCTIT_WRI

Delete program: MM_ACCTIT_DEL

Read program: none. Only write and delete.

Relevant OSS notes:

Application specific customizing

MM_ACCTIT has no application specific customizing.

Executing the write run and delete run

In transaction SARA, object MM_ACCTIT, select the write run:

Select your data, save the variant and start the archiving write run.

After the write run is done, check the logs. MM_ACCTIT archiving is fast and has a high archiving percentage (up to 100%).

The deletion run is standard: select the archive file and start the deletion run.

Data archiving: CO line items

This blog will explain how to archive CO line items via object CO_ITEM. Generic technical setup must have been executed already, and is explained in this blog.

Object CO_ITEM

Go to transaction SARA and select object CO_ITEM.

Dependency schedule (no dependency):

Tables that are archived:

Most important:

  • COEP: CO line items
  • COBK: CO object document header

Technical programs and OSS notes

Write program: CO_ITEM_WRI

Delete program: CO_ITEM_DEL

Read program: RKCOITS4

Relevant OSS notes:

Application specific customizing

In the application specific customizing for CO_ITEM you have to set the residence time in months per CO type:

Executing the write run and delete run

In transaction SARA, object CO_ITEM, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the date range. This is needed for data retrieval later on.

After the write run is done, check the logs. CO_ITEM archiving is of average speed, but has a high archiving percentage (up to 100%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria:

To avoid reporting issues (see OSS note 2676572 – SARI/KSB5 Archived documents not displayed in line item report), fill the archiving infostructure SAP_CO_ITEM_001.

Data archiving: purchase documents

This blog will explain how to archive purchase orders and purchase documents via object MM_EKKO. Generic technical setup must have been executed already, and is explained in this blog.

Object MM_EKKO

Go to transaction SARA and select object MM_EKKO.

Dependency schedule:

No dependencies.

Main tables that are archived:

  • EKKO: purchase document header
  • EKPO: purchase document item

Technical programs and OSS notes

Pre-processing program: RM06EV70

Write program: RM06EW70

Delete program: RM06ED70

Read program: RM06ER30

Relevant OSS notes:

Application specific customizing

In the application specific customizing for MM_EKKO you can maintain the document retention time settings:

You have to set the residence time per purchase order type:

Preprocessing

In transaction SARA, object MM_EKKO, select preprocessing:

There are quite a few reasons why a purchase order cannot be archived.

Executing the write run and delete run

In transaction SARA, object MM_EKKO, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the purchasing group and year. This is needed for data retrieval later on.

After the write run is done, check the logs. MM_EKKO archiving is of average speed and has a medium archiving percentage (50 to 90%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

For MM_EKKO start the read via SARA:

Then select the archive file(s).

Result is in a simple list.

If you set up the archiving infostructures, the popup with the files is skipped.

Data archiving: WM transfer requirements and orders

This blog will explain how to archive WM transfer orders and requirements via objects RL_TA and RL_TB. Generic technical setup must have been executed already, and is explained in this blog.

Objects RL_TA and RL_TB

Go to transaction SARA and select objects RL_TA and RL_TB.

Dependency schedule (no dependencies for both):

Main tables that are archived:

  • LTAK (transfer order header)
  • LTAP (transfer order item)
  • LTBK (transfer requirement header)
  • LTBP (transfer requirement item)

Technical programs and OSS notes

RL_TA:

Write program: RLREOT00S

Delete program: RLREOT10

Read program: RLRT0001

RL_TB:

Write program: RLREOB00S

Delete program: RLREOB10

Read program: RLRB0001

Relevant OSS notes:

Application specific customizing

RL_TA and RL_TB don’t have application specific customizing.

Typically RL_TA and RL_TB will yield 90 to 100% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, object RL_TA or RL_TB, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the warehouse number and year. This is needed for data retrieval later on.

After the write run is done, check the logs. RL_TA and RL_TB archiving is of average speed and has a high archiving percentage (90 to 100%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria for transfer orders:

Start the data retrieval program and fill selection criteria for transfer requirements:

In the popup screen select the wanted archive files.

Data archiving: CO Order data

This blog will explain how to archive CO Order data via object CO_ORDER. Generic technical setup must have been executed already, and is explained in this blog.

Object CO_ORDER

Go to transaction SARA and select object CO_ORDER.

Dependency schedule:

This means you must first archive the relevant purchase requisitions, purchase orders and financial documents relating to the CO order.

Tables that are archived:

Most important:

  • COEP: CO line items

Technical programs and OSS notes

Pre-processing program: RKOREO01

Write program: RKAARCWR

Delete program: RKAARCD1

Read program: RKAARCS1

Relevant OSS notes:

Application specific customizing

In the application specific customizing for CO_ORDER you have to set two residence times per CO order type:

Residence time 1 determines the time interval (in calendar months) that must elapse between setting the delete flag (step 1) and setting the deletion indicator (step 2).

Residence time 2 determines the time (in calendar months) that must elapse between setting the deletion indicator (step 2) and reorganizing the object (step 3).
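An illustrative example: with residence time 1 = 3 months and residence time 2 = 6 months, a CO order that gets its deletion flag on 01.01.2025 can receive the deletion indicator from 01.04.2025 onward, and can be picked up by the archiving write run from 01.10.2025 onward.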

Executing the pre-processing run

The pre-processing run will set the deletion indicator for the CO Orders:

The output is a list of the orders that are processed; for orders that are not processed, the blocking reason is listed.

Executing the write run and delete run

In transaction SARA, object CO_ORDER, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the date range. This is needed for data retrieval later on.

After the write run is done, check the logs. CO_ORDER archiving is of average speed, but has a high archiving percentage (up to 100%). The reason is that all filtering and checking has already been done by the pre-processing program.

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria:

The result is a list. Double click on an order in the list to see the order in the normal GUI layout.

FIORI app for monitoring data archiving jobs

SAP has delivered various apps for basis administrators.

This blog will explain the data archiving batch job monitoring FIORI app.

For generic data archiving technical setup: read this blog.

Activating the app for monitoring data archiving jobs

The full activation manual is published on the FIORI reference library.

Short manual:

  • Activate SICF service bas_ilm_jobmon (transaction SICF)
  • Activate ODATA service ILM_JOB_MONITOR_SERVICE (transaction /IWFND/MAINT_SERVICE)
  • Manually add the tile to your catalog (use edit home page and then add the app)

Using the app to display data archiving jobs

The main FIORI app tile already shows the number of failed jobs:

When you open the app, the overview screen appears:

On the left-hand side you can choose the archiving object. On the right-hand side you can see the last archiving jobs for the selected object.

When you click on a job, you can see the details per job:

There are tabs for the job results, job log details and application log.

Bug fix notes

Bug fix OSS notes:

Technical clean up for Solution Manager

This blog will focus on technical clean up for SAP solution manager.

Questions that will be answered are:

  • How can I reduce the size of fast growing tables in SAP solution manager?
  • Which functions of SAP solution manager are in use or were in use?

SAP Readiness check for Cloud ALM

SAP Cloud ALM will replace SAP solution manager. You can run the SAP Readiness check for Cloud ALM to determine which functions of solution manager are in use now or were in use in the past.

Technical clean up general

SAP solution manager is a netweaver system which also uses BI functions. For the general tables read the Technical clean up blog, and for the BI parts the BI clean up blog.

Solution manager specific clean up

A good starting point for clean up is OSS note 2549260 – Certain tables in Solution Manager system grow fast. This covers the most common use cases.

Special use cases are listed below.

BI clean up

For BI specific clean up of solution manager, read OSS note 2053261 – Solution Manager: HANA Database Cubes are not reorganized.

Custom code management clean up

Check OSS note 2790429 – Custom Code Management: Deletion Report for SP05 and higher. Then run program RAGS_CC_DATA_DELETION.

LMDB clean up

Table LMDB_P_CHANGELOG can grow very large. OSS note 1695927 – Cleaning up the change history in the LMDB explains how to run program RLMDB_CLEAR_CHANGELOG for clean up.

CUP data

If table CUP_USAGE_DATA grows fast, follow the instructions from OSS note 2150720 – CUP enabled by default with automated cleanup process for outdated statistics.

E2E diagnostics data

If table SMD_HASH_TABLE is growing fast, check OSS note 1480588 – ST: E2E Diagnostics – BI Housekeeping – Information to clean up the data.

Technical monitoring data

If you are using Solution manager for technical monitoring, the instructions for data reduction are listed in OSS note 2345041 – How to Reduce and Maintain Table Growth in Technical Monitoring for Solution Manager 7.1 & 7.2.

If you set up the monitoring in the past but are not using it, it is best to switch it off. If you are actively using it, consider moving to SAP Focused Run, which is far superior. More on Focused Run can be found in this blog series.

Large SWJ_CONT table

If table SWJ_CONT is growing, you can run program RSPPF_SWJCLEAN for clean up. See note 1890845 – Removal of unnecessary entries from SWJ_CONT.