Downloading and implementing new versions of OSS notes
SAP regularly updates its own OSS notes. To check in your system whether there are new versions of the OSS notes relevant to you, go to transaction SNOTE. Choose “Goto -> SAP Note Browser”, execute (F8), and then choose “Download Latest Version of SAP Notes” in the application toolbar. This will download all the latest versions. Check for the status “Obsolete version implemented” in the implementation state column.
Activation of inactive objects after implementing OSS note
In rare cases, after implementing an OSS note, some of the ABAP objects remain in an inactive state. To activate them, choose the menu option SAP Note and then Activate SAP Note Manually.
Or you can run program SCWB_NOTE_ACTIVATE to activate the coding of the note:
TCI notes
Start with reading the PDF document attached to OSS note 2187425 – TCI for customer. This contains the exact instructions to enable TCI-based correction instructions.
TCI has only recently received a rollback function. Please check whether you can update/patch to a version where the rollback works. See the PDF document in OSS note 2187425 for details on the undo function.
Applying TCI note
There are two ways to upload a TCI note.
Basis way: you will need SPAM access rights, and actions in client 000 are involved. Upload the TCI file in SPAM in client 000. Then apply the note via SNOTE in the main client. The note tool will ask you to confirm the use of the TCI mechanism.
ABAP way: you will need SPAM access rights. In transaction SNOTE use the menu option Goto / Upload TCI. After uploading the file, choose Decompress. Now apply the note via SNOTE. The note tool will ask you to confirm the use of the TCI mechanism.
In newer NetWeaver versions SNOTE has been revamped. You can apply this version earlier if you want to use it. Read more on the SNOTE revamp in this blog.
Applying notes in shadow during upgrade
In rare cases you might need to apply an OSS note in the shadow system during a system upgrade. The basis team will usually use the SUM tool for this. Applying notes to the shadow system during an upgrade can be needed to solve bugs that stop the upgrade.
Always handle with care. If you are not experienced with upgrades, let a senior handle it.
HANA data aging
HANA data aging is a method to reduce the memory footprint of the HANA in-memory part without disturbing the end users. It does not reduce your database size.
This blog will answer the following questions:
What is HANA data aging?
How to switch HANA data aging on?
How to set up HANA data aging for technical objects?
What about data aging for functional objects?
What is HANA data aging?
HANA data aging is an application method to reduce the memory footprint based on application data logic. It is not a database feature but an application feature. The goal of HANA data aging is not to reduce the database size (which it does not do), but to reduce the actual memory footprint of the HANA in-memory database.
Let’s take idocs as an example: idocs that are processed ok need to be kept in the database for an agreed amount of time before business or audit allows you to delete them. Let’s say you can only delete them after 1 year. Without data aging, that full year of idoc content occupies main memory. For daily operational tasks you normally only need the last 2 months of data in memory; for the rest you can accept that it takes a bit longer to read it from disk into memory.
This is exactly what data aging does: you partition the data into chunks based on application logic. In this case you can partition the idoc data per month and keep only the last 2 months in active memory. The other 10 months are on disk only. Reading the data of the last 2 months is as fast as usual. When you have to report on the 10 months on disk, the system first needs to load them from disk into memory, which will be slower.
To reduce the database size itself, you would still need to do data archiving.
The advantage of data aging is that the more expensive memory footprint can be reduced in a way that does not hamper the end users. Data aging is transparent for them. With data archiving, users always need to use a different transaction and read from the archive files.
How to switch on data aging?
To switch on data aging on system level you need to do 2 things:
Set the parameter abap/data_aging to on in RZ11
In SFW5 switch on the switch called DAAG_DATA_AGING
This only enables the system for data aging.
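For reference, the profile parameter behind the RZ11 setting is a simple on/off switch; in the instance profile it looks like this:

  abap/data_aging = on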
Switching on data aging for a technical object: example application logging
With transaction DAGADM you can see the administration status of the data aging objects. At first you will see red lights, indicating that the objects are not yet activated for data aging.
Per object there are extra transactions (which unfortunately differ per object…) to set the retention times. For application logging this is transaction SLGR. In this example we choose to age all logs after 180 days:
The advantage of this tailoring is that you can choose to age only some of the objects if you want.
The transaction and OSS note for each of the objects can be found on this SAP blog.
The next step is to set up partitions for the object. To do this, start transaction DAGPTM and open the object you want to partition:
The initial screen is in display mode. Hit the Change button. On the bottom right side hit the Period button (Selection Time Period). In the popup enter the desired start date, the time buckets (months, years) and the number of repetitions:
Now the partitions are defined. Hit the Execute button to start the partitioning in the background. Wait until the job finishes. Before running this on a productive system, check the runtime first on a non-productive system with roughly the same data size, if possible.
After partitioning the screen should look like this:
Now we can activate the object in transaction DAGADM. Select the object and press the Activate button. A popup appears to assign the object to an existing data aging group or to a new group:
The data aging run will be done per group.
To start the actual data aging run, start transaction DAGRUN.
Here you can schedule a new run with the Schedule new run button.
To see the achieved results of the data aging go to transaction DAGADM and select the object. Then push the button View current/Historical data.
Functional data aging objects
Functional data aging objects exist as well for financial documents, sales orders, deliveries, etc. The full list and the minimal application version can be found on this SAP blog.
Words of caution for functional data aging:
The technical data aging objects are more mature in coding and usage. They are used in productive systems and have fewer bugs than the functional objects.
Before switching on a functional data aging object you need to prepare your custom ABAP code. If your Z programs are not adjusted to take the partitions into account via date selections (or another application selection mechanism), all benefits are immediately lost. A Z program that constantly reads the full history will force a continuous read of the historical partitions, as the sketch below illustrates.
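As a minimal, hypothetical sketch of such an adjustment (assuming sales orders are aged, a 60-day hot period, and a selection on the creation date; table VBAK and its fields are standard, the report name is made up):

  REPORT z_recent_sales_orders.

  " Hypothetical Z report: restrict the read to recent documents so that
  " only the current (hot) partitions have to be touched.
  DATA lv_cutoff TYPE d.
  lv_cutoff = sy-datum - 60.          " stay inside the assumed 60-day hot period

  SELECT vbeln, erdat, auart
    FROM vbak
    WHERE erdat >= @lv_cutoff
    INTO TABLE @DATA(lt_orders).

  " An unrestricted read such as
  "   SELECT vbeln FROM vbak INTO TABLE @DATA(lt_all_orders).
  " reads the complete history and pulls the historical partitions from disk
  " into memory on every run, losing the data aging benefit.
  WRITE: / |Sales orders created in the last 60 days: { lines( lt_orders ) }|.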
SAP standard clean up jobs
Using SM36 you can plan all SAP standard jobs (which include a lot of clean up jobs for spools, dumps, etc.) via the button Standard Jobs.
By hitting the button Default scheduling in an initial system, or after any upgrade or support package, the system will plan its default clean up schedule.
S/4HANA has a different setup of standard jobs. See this blog.
Clean up of old idocs
Idoc data is stored in the EDI* tables. The largest tables are usually EDI40, EDIDS and EDIDC.
Old idocs can be deleted using transaction WE11.
In batch mode you can schedule it as program RSETESTD.
At the bottom of the selection screen are the technical options:
The idoc deletion job can fail if there is too much data to process. If this happens, remove the 4 tick boxes here and use the separate deletion programs RSWWWIDE, RSARFCER, SBAL_DELETE and RSRLDREL2. These 5 programs combined will delete the same data, but run more efficiently. This procedure is also explained in OSS note 1574016 – Deleting idocs with WE11/RSETESTD.
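A minimal sketch of how this split could be scheduled as one background job with several steps (the job name and the variant names are made up and must be created up front for the respective programs; in the RSETESTD variant the 4 tick boxes should be removed):

  REPORT z_schedule_idoc_cleanup.

  " Sketch: run the idoc deletion and the separate clean up programs as
  " steps of one background job.
  DATA: lv_jobname  TYPE btcjob VALUE 'Z_IDOC_CLEANUP',
        lv_jobcount TYPE btcjobcnt.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount
    EXCEPTIONS
      OTHERS   = 1.
  IF sy-subrc <> 0.
    MESSAGE 'Could not open background job' TYPE 'E'.
  ENDIF.

  " Hypothetical variant names; each variant must exist for its program.
  SUBMIT rsetestd    USING SELECTION-SET 'Z_IDOC_DEL'
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
  SUBMIT rswwwide    USING SELECTION-SET 'Z_WF_DEL'
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
  SUBMIT rsarfcer    USING SELECTION-SET 'Z_ARFC_DEL'
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
  SUBMIT sbal_delete USING SELECTION-SET 'Z_APPLOG_DEL'
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
  SUBMIT rsrldrel2   USING SELECTION-SET 'Z_LINK_DEL'
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_jobcount
      jobname   = lv_jobname
      strtimmed = 'X'.               " start the job immediately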
Clean up of table logging
Table logging is stored in table DBTABLOG (general information on table logging can be found in this blog). Deletion can be done using transaction SCU3 and then choosing the option Edit / Logs / Delete, or by using program RSTBPDEL.
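If you first want to see which tables cause most of the table logging volume before setting up the deletion, a small analysis report can help. A minimal sketch (the report name is made up; counting on a very large DBTABLOG can take a while):

  REPORT z_dbtablog_analysis.

  " Sketch: number of log records in DBTABLOG per logged table.
  SELECT tabname, COUNT( * ) AS log_entries
    FROM dbtablog
    GROUP BY tabname
    INTO TABLE @DATA(lt_log_stats).

  SORT lt_log_stats BY log_entries DESCENDING.

  LOOP AT lt_log_stats INTO DATA(ls_stat) TO 20.   " top 20 tables only
    WRITE: / ls_stat-tabname, ls_stat-log_entries.
  ENDLOOP.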
Clean up of application logging
Application logging (SLG1) is stored in tables BALDAT and BALHDR (for general information on the use of the application log, read this blog). Deletion can be done using transaction SLG2 or by using program SBAL_DELETE.
The last options, to fine-tune the number of logs per job and the commit counter setting, do not appear by default. Select the menu option Program / Expert mode first.
Clean up of workflow items
Workflows are stored in many tables starting with SW*.
You can delete work item history with transaction SWWH or program RSWWHIDE.
This clean up only deletes the technical history of the work items and not the workflows themselves. Whether the workflows themselves can be deleted or have to be archived is a functional decision that depends on the business and audit needs.
The workflow deletion program can create a large amount of spool output. If this is not wanted, use the NULL printer.
If your business is using the GOS (generic object services) to see workflows linked to a business document, and they cannot retrieve the archived work item, please follow carefully the instructions in OSS note 2356250 – Not able to view archived workflows.
If you want to delete the actual workflow you have to run program RSWWWIDE.
Before deleting workflows, take care to check that they are not needed for audit or financial proof. Some workflows contain approval steps recording who approved what at which time.
Test this first and check with the data owner that the documents are no longer needed.
Clean up of SAP office documents
For a full explanation on deleting SAP office documents (including all the pre-programs to run) and the bug fix notes, read this dedicated blog on SAP office document deletion.
Migrating SAP office documents to a content server
Usually the business will not allow deletion of SAP office documents (unless they are very old). You might end up with a SOFFCONT1 table of 100 GB or more.
Instead of deleting SAP office documents, you can also migrate them to a content server. Read more in this blog.
Clean up of change documents
Change documents contain business data changes to business objects. If the tables CDHDR and CDPOS grow very big, start with an age analysis. You can propose to the business to delete change documents older than 10 years; 10 years is the legal period for which a lot of data needs to be kept. Deletion is done via program RSCDOK99. If the business does not want to delete the data, but wants to keep it in an archive, you can use data archiving object CHANGEDOCU. Retrieval of archived change documents is done via transaction RSSCD100.
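A minimal sketch of such an age analysis (the report name is made up; the 10 year cutoff is approximated as 3650 days):

  REPORT z_change_doc_age_check.

  " Sketch: count change document headers older than roughly 10 years.
  DATA lv_cutoff TYPE d.
  lv_cutoff = sy-datum - 3650.        " roughly 10 years back

  SELECT COUNT( * )
    FROM cdhdr
    WHERE udate < @lv_cutoff
    INTO @DATA(lv_old_headers).

  WRITE: / |Change document headers older than 10 years: { lv_old_headers }|.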
Large SYS_LOB tables (attachments)
If you have large SYS_LOB tables, most likely these are filled with attachments. Consider setting up an SAP content server (see blog) and then migrating the documents from the SAP database to the content server (see blog).
To analyze SYS_LOB tables, follow the instructions in this dedicated blog.
Clean up of audit log data
You can schedule program RSAUPURG or program RSAU_FILE_ADMIN with the right variant to delete old audit log data:
Before deleting audit log data, first agree with your security officer on the retention period. More on audit log in this blog.
Clean up of user role assignment data
If you have an older system, you will find that many users have duplicate role assignments, or role assignments with validity dates in the past. This leads to a large number of entries in table AGR_USERS. You can clean this up by compressing the data with program PRGN_COMPRESS_TIMES. Read more in this blog.
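To get a feeling for how much there is to clean up before running PRGN_COMPRESS_TIMES, a quick count of expired assignments in AGR_USERS can help. A minimal sketch (the report name is made up):

  REPORT z_expired_role_assignments.

  " Sketch: expired user-role assignments versus the total in AGR_USERS.
  SELECT COUNT( * )
    FROM agr_users
    WHERE to_dat < @sy-datum
    INTO @DATA(lv_expired).

  SELECT COUNT( * )
    FROM agr_users
    INTO @DATA(lv_total).

  WRITE: / |Expired role assignments: { lv_expired } out of { lv_total }|.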
Enqueue and lock issue analysis
Enqueue and lock table issue analysis can be a bit hard from time to time. These issues don't occur regularly, but when they do, they can have a big impact on system performance.
This blog will explain:
How to detect enqueue issues?
How to quickly analyze the enqueue issues?
Detecting enqueue issues
Enqueue issues can easily be detected in SM50 and SM66 when work processes get stuck for a long time with status ENQ.
First analysis on enqueue issues
The first analysis of enqueue issues can be done in transaction SM12. From the menu, select the options Extras / Diagnosis and Extras / Diagnosis in Update. This will run the diagnostics on the enqueue handling.
Result looks like:
To get statistics on the enqueue processing, select the menu option Extras / Statistics on the same SM12 start screen.
Deeper analysis of enqueue issues
For deeper analysis of the lock issues, you might need to switch to the detailed error handling part of SM12. This is a hidden feature. To switch it on you must have the correct authorization (S_ENQUE with ALL in the activities). Switching is done by keying in the word TEST in the GUI command field (where you normally key in the transaction codes, /n, etc.).
Now you will see an extra menu called Error Handling.
From this menu you can directly launch program RSMONENQ_PERF via the menu option Error Handling / Diagnosis Environment. This program checks the performance of the enqueue handling:
The Error Handling menu also gives you the option to trace the enqueue processing.
Lock table overflow
A lock table overflow can happen when programs set more locks than the memory allocated for the lock table can hold. In a normal system this will hardly ever occur. But during a conversion that operates on a massive amount of data (sometimes even using parallel jobs) a lock table overflow can happen. If it happens, it will affect ALL users: they will get a lock table overflow error and cannot save their work. More than enough reason to test a large conversion first on a test system with production-like sizing and settings.
If you are running an older ECC system, the lock table settings in the profile parameters might be set quite low. Newer, upgraded ECC systems can handle much higher values of the enque/table_size parameter.
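As an illustration only (the value below is an example; align the actual size with SAP's current recommendations for your release and kernel), the profile entry could look like this:

  enque/table_size = 102400

The value is in kilobytes, so the example above corresponds to a lock table of roughly 100 MB.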