Data archiving: sales orders

This blog will explain how to archive sales orders via object SD_VBAK. The generic technical setup must already have been executed; it is explained in a separate blog.

Object SD_VBAK

Go to transaction SARA and select object SD_VBAK.

Dependency schedule:

If you use production planning backflush, you must archive the backflush documents first. Then archive material documents, shipment costs (if in use), SD transports (if in use), deliveries (if in use), and the purchase orders and purchase requisitions related to the sales orders.

Main tables that are archived:

  • NAST (output records belonging to the archived documents)
  • VBAK (sales order header)
  • VBAP (sales order item)
  • VBEP (sales order schedule line data)
  • VBFA (document flow records belonging to the archived documents)
  • VBOX (SD Document: Billing Document: Rebate Index)
  • VBPA (partner records belonging to the archived documents)
  • VBUP (sales order item status data)

Technical programs and OSS notes

Preprocessing program: S3VBAKPTS

Write program: S3VBAKWRS

Delete program: S3VBAKDLS

Read program: S3VBAKAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for SD_VBAK you can maintain the document retention time settings:

Executing the preprocessing run

In transaction SARA, select object SD_VBAK and execute the preprocessing run. The preprocessing run checks which documents can be archived and prepares them:

Check the log for the results:

Typically SD_VBAK will yield 30 to 70% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, for object SD_VBAK, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a descriptive name that includes the sales organization/shipping point and year, for example SD_VBAK_1000_2015. This is needed for data retrieval later on.

After the write run is done, check the logs. SD_VBAK archiving has average speed, and the percentage of documents actually archived varies (roughly 40 to 90%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill the selection criteria:

In the second screen select the archive files. Expect a long wait before the data is shown.

For faster retrieval, set up the data archiving infostructures SAP_SD_VBAK_001 and SAP_SD_VBAK_002. These are not active by default, so you have to use transaction SARJ to set them up and then fill the structures (see the blog on archiving infostructures).

Data archiving: SD invoices

This blog will explain how to archive SD invoices via object SD_VBRK. The generic technical setup must already have been executed; it is explained in a separate blog.

Object SD_VBRK

Go to transaction SARA and select object SD_VBRK.

Dependency schedule:

If you use production planning backflush, you must archive the backflush documents first. Then archive material documents, shipment costs (if in use), SD transports (if in use) and deliveries (if in use).

Main tables that are archived:

  • NAST (output records belonging to the archived documents)
  • VBFA (document flow records belonging to the archived documents)
  • VBOX (SD Document: Billing Document: Rebate Index)
  • VBPA (partner records belonging to the archived documents)
  • VBRK (invoice header)
  • VBRP (invoice line items)
  • VBUK (invoice status)

Technical programs and OSS notes

Preprocessing program: S3VBRKPTS

Write program: S3VBRKWRS

Delete program: S3VBRKDLS

Read program: S3VBRKAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for SD_VBRK you can maintain the document retention time settings:

Executing the preprocessing run

In transaction SARA, select object SD_VBRK and execute the preprocessing run. The preprocessing run checks which documents can be archived and prepares them:

Check the log for the results:

Typically SD_VBRK will yield 30 to 70% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, for object SD_VBRK, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a descriptive name that includes the sales organization/shipping point and year. This is needed for data retrieval later on.

After the write run is done, check the logs. SD_VBRK archiving has average speed, and the percentage of documents actually archived varies (roughly 40 to 90%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill the selection criteria:

In the second screen select the archive files. Expect a long wait before the data is shown.

For faster retrieval, set up the data archiving infostructures SAP_SD_VBRK_001 and SAP_SD_VBRK_002. These are not active by default, so you have to use transaction SARJ to set them up and then fill the structures (see the blog on archiving infostructures).

Data archiving: deliveries

This blog will explain how to archive deliveries via object RV_LIKP. The generic technical setup must already have been executed; it is explained in a separate blog.

Object RV_LIKP

Go to transaction SARA and select object RV_LIKP.

Dependency schedule:

If you use production planning backflush, you must archive the backflush documents first. Then archive material documents, shipment costs (if in use) and SD transports (if in use).

Main tables that are archived:

  • LIKP (delivery header)
  • LIPS (delivery item)
  • NAST (output records belonging to the archived documents)
  • VBFA (document flow records belonging to the archived documents)
  • VBPA (partner records belonging to the archived documents)

Technical programs and OSS notes

Preprocessing program: S3LIKPPTS

Write program: S3LIKPWRS

Delete program: S3LIKPDLS

Read program: S3LIKPAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for RV_LIKP you can maintain the document retention time settings:

Executing the preprocessing run

In transaction SARA, select object RV_LIKP and execute the preprocessing run. The preprocessing run checks which documents can be archived and prepares them:

You must run the preprocessing program twice: once for inbound and once for outbound deliveries.

Check the log for the results:

Typically RV_LIKP will yield 30 to 70% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, for object RV_LIKP, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a descriptive name that includes the sales organization/shipping point and year. This is needed for data retrieval later on.

After the write run is done, check the logs. RV_LIKP archiving has average speed, and the percentage of documents actually archived varies (roughly 40 to 90%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill the selection criteria:

In the second screen select the archive files. Expect a long wait before the data is shown.

For faster retrieval, set up the data archiving infostructures SAP_RV_LIKP_001 and SAP_RV_LIKP_002. These are not active by default, so you have to use transaction SARJ to set them up and then fill the structures (see the blog on archiving infostructures).

Data archiving: material documents

This blog will explain how to archive material documents via object MM_MATBEL. The generic technical setup must already have been executed; it is explained in a separate blog.

Object MM_MATBEL

Go to transaction SARA and select object MM_MATBEL.

Dependency schedule:

If you use production planning backflush, you must archive the backflush documents first.

Main tables that are archived:

  • MKPF (material document header)
  • MSEG (material document item)
  • NAST (output records belonging to the archived documents)

Technical programs and OSS notes

Write program: RM07MARCS

Delete program: RM07MADES

Read program: RM07MAAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for MM_MATBEL you can maintain the document lifetime settings:

Executing the write run and delete run

In transaction SARA, for object MM_MATBEL, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a descriptive name that includes the plant and year. This is needed for data retrieval later on.

After the write run is done, check the logs. MM_MATBEL archiving is fast and has a high archiving percentage (up to 100%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill the selection criteria:

In the second screen select the archive files. Expect a long wait before the data is shown.

Faster data retrieval is possible via the archive explorer, transaction SARE (for the archive explorer the infostructures must be filled first, see the blog on archiving infostructures):

Fill out the document criteria:

From the result list, double click a line to jump to transaction MIGO and view the archived document.

Data archiving: archiving infostructures

Several retrieval functions in SAP data archiving require the setup of archiving infostructures.

More on archiving data retrieval in general can be found in this blog.

Activation of infostructures

Using transaction SARJ you can configure the infostructures for archiving:

Using display, you can see the fields that are included in the archiving infostructure:

On the first screen you can activate the infostructure by pressing the Activate button.

Activation only applies the structure to future archive runs, not to past runs. For existing archive files you need to fill the structures first.

Filling the structures for existing archive files

Filling the structures for existing archive files is a bit hidden. Go to transaction SARJ, select the object, then choose menu Environment and choose Fill Structures.

In the next screen select the files you want to fill (these are normally yellow):

After the selection, choose the Fill Structures button to start the batch job that fills them. Don't select too many files at once: filling is read-intensive on the archive.

Green ones are done. Red ones have failed.

The most common cause of failures is that the write program variants overlapped, so the same document was archived twice into different archive files.

What can be done? If it is acceptable to have the same document in different files, you can ignore the archive session entries with errors in SARI.

To avoid having duplicate keys in the infostructure in future, you can add the filename as an extra key field to the infostructure. This can be done as follows:

– SARJ ->Infostructure -> Display
– Technical data
– Change the field “File Name Processing” from ‘D’ to ‘K’

Archiving infostructure status

Use transaction SARI to check the status of the archiving infostructures:

From here you can open Status to check the status of the files.

Archive Explorer jumps to transaction SARE.

Customizing jumps to the SARJ transaction described above.

Data archiving: data retrieval

When you perform data archiving, from time to time you need to give support on data retrieval issues.

This blog will explain some of the general data retrieval concepts.

Questions that will be answered in this blog are:

  • How does single record retrieval work?
  • How can I use the archive explorer?
  • How can I get a list of data from the archive?

Single record retrieval

Single record retrieval is different per archiving object.

Some objects (like FI_DOCUMNT) are nicely integrated. In FB03 the system first checks the database, then looks in the archive infostructures to determine whether the document is archived, and then shows the document in the same layout.

Most objects have an archive read program, which you can find in SARA:

Now run the read program:

And fill out the record(s) you need:

Now you need to select the data files:

If you didn't label your files correctly, you need to select them all, which makes data retrieval slow.

Results are shown:

The results might look fine, or very basic; this differs per archiving object.
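If the standard read programs are too limited, you can also read archive files programmatically via the ADK (Archive Development Kit) function modules. Below is a minimal sketch that reads archived sales order headers, assuming the standard ADK read API (ARCHIVE_OPEN_FOR_READ, ARCHIVE_GET_NEXT_OBJECT, ARCHIVE_GET_TABLE, ARCHIVE_CLOSE_FILE); the report name is hypothetical and you should verify the exact parameters in your release.

  REPORT z_adk_read_demo.

  " Minimal ADK read sketch for archiving object SD_VBAK.
  DATA: lv_handle TYPE sy-tabix,                 " archive handle from the ADK
        lt_vbak   TYPE STANDARD TABLE OF vbak.

  " Open the archiving object for reading; the ADK asks which
  " archive files/sessions to read.
  CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
    EXPORTING
      object         = 'SD_VBAK'
    IMPORTING
      archive_handle = lv_handle.

  " Loop over all archived business objects in the selected files.
  DO.
    CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
      EXPORTING
        archive_handle = lv_handle
      EXCEPTIONS
        end_of_file    = 1
        OTHERS         = 2.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.

    " Read all VBAK records of the current archived sales order.
    CALL FUNCTION 'ARCHIVE_GET_TABLE'
      EXPORTING
        archive_handle        = lv_handle
        record_structure      = 'VBAK'
        all_records_of_object = 'X'
      TABLES
        table                 = lt_vbak.

    LOOP AT lt_vbak INTO DATA(ls_vbak).
      WRITE: / ls_vbak-vbeln, ls_vbak-erdat.
    ENDLOOP.
  ENDDO.

  CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
    EXPORTING
      archive_handle = lv_handle.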

Use of archive explorer for table level

An alternative way is to use the archive explorer. This gives details at table level.

Start transaction SARE:

Fill out the required object and archive infostructure. In this case we used change documents. In the second screen fill in the object:

Now you can see the list of changes:

Double click on the record to see the tables:

Double clicking on the table will give the actual table line content.

Filling infostructures

More on infostructures can be read in this dedicated blog.

List transactions

Some transactions (especially in the FICO domain) have reporting that is integrated with the data archive. We will use transaction FBL3N as an example.

Start FBL3N:

Then click on Data Sources, include Archive, and select the needed files:

If you didn't label your files correctly, you need to select them all, which makes data retrieval slow.

FIORI app for monitoring data archiving jobs

SAP has delivered various apps for basis administrators.

This blog will explain about the data archiving batch job monitoring FIORI app.

Activating the app for monitoring data archiving jobs

The full activation manual is published on the FIORI reference library.

Short manual:

  • Activate SICF service bas_ilm_jobmon
  • Activate ODATA service ILM_JOB_MONITOR_SERVICE
  • Manually add the tile to your catalog (use edit home page and then add the app)

Using the app for monitoring data archiving jobs

The main FIORI app tile already shows the number of failed jobs:

When you open the app, the overview screen appears:

On the left hand side you can choose the archiving object. On the right hand side you can see the last archiving jobs for the selected object.

When you click on a job, you can see the details per job:

There are tabs for the job results, job log details and application log.

Bug fix notes

Bug fix OSS notes:

Data archiving: reducing the number of parallel batch jobs

When executing data archiving you have to act carefully. The data archiving write and delete processes can consume a lot of CPU power on the database. Also, if you are not careful, you might accidentally claim all background processes. This blog will explain how to limit the number of batch jobs used for data archiving. The data archiving run process itself is described in a separate blog.

Questions that will be answered in this blog are:

  • How can I limit the number of deletion jobs?
  • How can I restrict the archiving jobs to run on a specific application server only?

Limiting the number of deletion jobs

When the write run of data archiving is finished, it may have produced many files. If you are not careful with the deletion and simply select all files, each file will start its own deletion job. This consumes a lot of CPU power at database level, since each deletion run fires many DELETE statements at the database in rapid sequence. You might also consume all batch jobs, leaving no room for any business batch job.

Instead of running the deletion from SARA, you can also run the deletion via program RSARCHD:

With this example, MM_EKKO files will be deleted: a maximum of 50 files from 1 archiving run will be processed, with at most 2 deletion batch jobs running at the same time.

The general OSS note for this program is 133707 – Data archiving outside transaction SARA.
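To keep the deletion contained, you can schedule RSARCHD yourself as a single controlled background job using the classic JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE pattern. A minimal ABAP sketch follows; the report name and the variant name ZMM_EKKO_DEL are hypothetical, and the variant is assumed to hold the RSARCHD selections described above (object, maximum files, maximum parallel deletion jobs).

  REPORT z_schedule_rsarchd.

  DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_ARCH_DELETE_MM_EKKO',
        lv_jobcount TYPE tbtcjob-jobcount.

  " Create a background job container.
  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  " Schedule RSARCHD with a saved variant (hypothetical name ZMM_EKKO_DEL)
  " that limits the number of files and parallel deletion jobs.
  SUBMIT rsarchd USING SELECTION-SET 'ZMM_EKKO_DEL'
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  " Release the job for immediate start.
  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_jobcount
      jobname   = lv_jobname
      strtimmed = 'X'.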

Relevant bug fix OSS notes:

General application server restrictions via batch job server group

In SM61 you can set up a special batch job server group. Here you can assign a single application server for your data archiving batch job processing. We assume here you created a group called DATA_ARCH.

In SARA you can now go to the general data archiving settings:

Now you can link the batch job server group:

With the JobClasses button you can specify the job priorities per data archiving function:

A = high priority, C = low priority. The above screenshot is an example.

The second part of OSS note 2269004 – How to reduce parallel archiving jobs on Integration Engine describes the procedure as well. The first part of the note is only relevant for SAP PI.

Data archiving improvement notes 2018

In 2018 SAP ran an improvement project that resulted in a set of OSS notes making data archiving more robust and easier to use.

All of these notes require manual work. Implement only the ones that are really useful to you.

Archiving write process improvements

Write variant maintenance has been made easier by allowing the copying of variants (useful if you have many plants and company codes and want to store each one in a different archive file): 2520093 – Archive administration: Enhanced variant maintenance (writing, preprocessing, and postprocessing).

To include the session number and archive file key in the title of the write job's print list, implement this OSS note: 2637105 – Print list for archiving write jobs: Placeholders for session numbers, archive file key in title.

Archiving storage process improvements

The archiving system technical check button is available in OAC0, but not in SARA. After applying this note you can also run the check from the technical settings in SARA: 2599263 – Connection test for storage systems for archiving object.

Deletion process improvements

To quickly continue interrupted archiving sessions, apply note 2520094 – Continue: Information on existence of interrupted or incomplete archiving sessions.

This note implements checks that warn you about unstored archive files from earlier, uncompleted store and delete runs: 2586921 – Run selection for deletion: Information about the existence of unstored archive files.

Some archiving objects use the AIS (Archive Information System) to enable quick retrieval of archived information by end users. This note gives a warning before the start of a deletion if the AIS is not active for the object: 2624077 – Starting delete jobs: Check for active info structures.

Archiving overview and logging improvement

To get a better overall overview of all logs, apply OSS note 2433546 – Archive administration logs: Information about errors in hierarchy display. Showing only success messages is possible after applying OSS note 2855641 – Logs: New option “Success Messages Only” for detail log.

For direct navigation to the Archive File Browser, apply OSS note 2544517 – Archive administration: Direct navigation to ArchiveFileBrowser. This note only gives you a navigation link. You can already start the Archive File Browser using transaction AS_AFB:

Archive file browser

Note 2823924 – Archive File Browser: Messages that do not belong to the Archive File Browser are output solves a bug in the Archive File Browser.

SAP database growth control: data archiving business discussions

This blog addresses the main challenge in SAP data archiving for functional objects: the discussions with the business.

This blog will give answers to the following questions:

  • When to start data archiving discussion with the business?
  • How to come to good retention periods?
  • What are arguments for not archiving certain data?

Data archiving discussion with the business

Unlike technical data deletion, functional data archiving cannot be done without proper business discussion and approval.

Depending on your business, several aspects of the data are important:

  • Auditing and SOX needs
  • Tax and legal retention periods
  • Product data requirements
  • And so on…

Here are some rules of thumb you can use before starting business discussions about archiving:

Rule of thumb 1: the system is pretty new. Wait at least 3 years to get insight into which tables are growing fast and are worth investigating for data archiving.
Rule of thumb 2: if your system is growing slowly but the infrastructure capabilities grow faster, only perform technical clean-up and don't even start functional data archiving.
Rule of thumb 3: if you are on HANA, check whether the data aging concept for the relevant functional objects is stable enough and free of bugs. Data aging does not require much work: it is purely technical, needs little business discussion, and data retrieval is transparent from the end user perspective.

Data analysis before starting the discussion

If your system is growing fast and/or you are getting performance complaints, then you need to do proper data analysis before starting any business discussion.

Start with a proper analysis of the data. Use the TAANA tool to get insight into the data: what is the distribution of data per document type, per year, per plant/company code, etc.? If you want to propose a retention period of, say, 5 years, you can use the TAANA results to show what percentage of data you can move out of the database.
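If you just want a quick first impression before running a full TAANA analysis, a simple grouped count already shows the age profile of a table. Below is a minimal ABAP sketch for sales order headers; the report name and year range are illustrative, and it simply counts VBAK records per creation year.

  REPORT z_vbak_age_profile.

  " Rough age profile of sales orders: VBAK record count per creation
  " year. A quick substitute for a TAANA analysis on field ERDAT.
  DATA: lv_from  TYPE vbak-erdat,
        lv_to    TYPE vbak-erdat,
        lv_count TYPE i.

  DO 10 TIMES.                       " years 2010..2019, illustrative
    DATA(lv_year) = 2009 + sy-index.
    lv_from = |{ lv_year }0101|.
    lv_to   = |{ lv_year }1231|.

    SELECT COUNT(*) FROM vbak
      WHERE erdat BETWEEN @lv_from AND @lv_to
      INTO @lv_count.

    WRITE: / lv_year, lv_count.
  ENDDO.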

Secondly, if you have an idea of which data you want to archive, first execute a trial run on a recent production copy. There might be functional blocks that prevent you from archiving data (such as documents that are not closed).

The third important factor is the ease of data retrieval. Some objects have a nice, simple data retrieval function; others are really terrible. If retrieval is good, the business will more easily accept a shorter retention period. Read more on technical data retrieval in this blog.

As a last step you can build the business case: how much data will be saved (and hence how much money), how much performance will be gained, and how much time needs to be invested in setting up, checking (testing!) and running the data archiving runs. For example, if TAANA shows that 60% of a 6 TB database is older than the proposed retention period, roughly 3.6 TB could in theory be moved out of the database.

In practice a data archiving business case is usually only present in very large systems of 5 TB and up. This sizing tipping point shifts over time as hardware gets cheaper and hourly manpower costs go up.

The discussion itself

Take ample time to plan the discussion itself. It is not uncommon for archiving discussions to take over a year to complete. The better you are prepared, the easier the discussion. It also helps to have a few real performance pain points that can be solved via data archiving; there is normally a business owner for such a pain point who can help push data archiving.