
Must-Do Steps Before Restarting After an SAP HANA Crash


A full system information dump refers to the practice of gathering and saving essential log and trace files when a critical system goes down, before restarting it. This precaution matters because a restart can overwrite crucial low-level logs or traces, making it much harder to analyze the root cause of the issue later on.


SAP HANA supports this process by offering a feature called a full system information dump. With this dump, you can select and save specific logs, allowing you to preserve important information for later troubleshooting. By doing this, you ensure that you have the necessary data to investigate and address the problem effectively after restarting the SAP HANA database.


Additionally, in the Database Directory, you have the option to specify the credentials of the database user needed to access detailed information about an individual database. This step is essential unless a single sign-on mechanism is in place for that particular database.



Steps to collect a full system information dump


  • Start by locating the Alerts and Diagnostics card.

  • Click on the "Manage full system information dumps" link.

  • Select "Collect Diagnostics" and pick one of the following options:


Collect from Existing Files:


Purpose: Choose this option to gather diagnostic information for specific file types over a defined time period (the last seven days by default).

Include System Views: If you wish to include information from system views, you can select the option. However, note that if connected to the system database of a multiple-container system, only information from the system views of the system database is collected. Information from the system views of tenant databases is not collected.

Performance Impact: Collecting information from system views involves executing SQL statements, which might impact system performance. This option is not available in diagnosis mode, and the database must be online to use it.


Create from Runtime Environment:


Purpose: This option is selected when you want to limit the information collection to runtime environment (RTE) dump files.

Additional Configuration: You can configure the collection of dump files by specifying the number of sets to be collected, the interval at which RTE dump files are collected, the host(s), service(s), and section(s) from each selected service.

Processing Time: The system collects the relevant information and saves it to a ZIP file. This process may take some time, and it can be allowed to run in the background.

Multiple-Container System: If connected to the system database of a multiple-container system, information from all tenant databases is collected and saved to separate ZIP files.


 

  • In the pop-up window, choose the specific information items you wish to collect. Then, at the bottom-right corner, click on "Start Collecting."

  • Once all the data is gathered, you'll see the "fullsysteminfodump_<SID><DBNAME><HOST>_<timestamp>.zip" file in the collections table.
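The naming convention above can be sketched in a few lines of Python. The helper name, the timestamp format, and the sample values are illustrative assumptions, not part of SAP's tooling:

```python
from datetime import datetime

def dump_filename(sid: str, dbname: str, host: str, ts: datetime) -> str:
    """Build the expected collection file name:
    fullsysteminfodump_<SID><DBNAME><HOST>_<timestamp>.zip
    (the timestamp format used here is an assumption for illustration)."""
    stamp = ts.strftime("%Y_%m_%d_%H_%M_%S")
    return f"fullsysteminfodump_{sid}{dbname}{host}_{stamp}.zip"

print(dump_filename("HDB", "SYSTEMDB", "hanahost01",
                    datetime(2024, 1, 15, 9, 30, 0)))
# fullsysteminfodump_HDBSYSTEMDBhanahost01_2024_01_15_09_30_00.zip
```

Knowing the pattern makes it easy to locate the right archive in the collections table, or on disk, when several collections exist.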


Collecting diagnostic information from the command line


The fullSystemInfoDump.py script is part of the SAP HANA server installation (shipped with its Python support files) and can be executed directly from the command line. The information it collects is what we need to attach when requesting support from SAP.


Command: python fullSystemInfoDump.py --tenant <SID>


Other options to check:


→    --nosql : excludes collection of system views

→    --days=DAYS : collects trace files from the given number of past days

→    --help : shows usage information


If SQL access is available (and --nosql is not specified), the script also exports data from system views. Where SQL access is not possible, the script still collects the support information, just without exporting data from system views.


Unless the --rtedump option is specified, the script collects all the mentioned file types; otherwise, it exclusively generates and gathers runtime environment (RTE) dump files.
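Taking the option names from the text above, a small wrapper can assemble the invocation. This wrapper is an illustrative sketch, not part of the SAP installation:

```python
from typing import List, Optional

def build_dump_command(tenant: str, days: Optional[int] = None,
                       nosql: bool = False, rtedump: bool = False) -> List[str]:
    """Assemble an argument vector for fullSystemInfoDump.py.
    Option names are taken from the surrounding text; the wrapper
    itself is a hypothetical helper for illustration."""
    cmd = ["python", "fullSystemInfoDump.py", "--tenant", tenant]
    if days is not None:
        cmd.append(f"--days={days}")   # trace files from the past N days
    if nosql:
        cmd.append("--nosql")          # skip exporting system views
    if rtedump:
        cmd.append("--rtedump")        # collect only RTE dump files
    return cmd

print(" ".join(build_dump_command("HDB", days=7)))
# python fullSystemInfoDump.py --tenant HDB --days=7
```

Passing the argument list to something like subprocess.run on the HANA host would then execute the collection.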


Collecting diagnostic files: https://me.sap.com/notes/1732157


Information collected by fullSystemInfoDump.py:


Information is saved to $DIR_GLOBAL/sapcontrol/snapshots. $DIR_GLOBAL typically points to /usr/sap/<SID>/SYS/global.


Log file 

All information about what has been collected is shown as console output and written to a file named log.txt, which is stored in the ZIP file.
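Because log.txt travels inside the collection ZIP, it can be inspected without unpacking the whole archive. A minimal sketch using Python's standard zipfile module (the archive path is a placeholder):

```python
import zipfile

def read_collection_log(zip_path: str) -> str:
    """Return the contents of log.txt stored inside a collection ZIP,
    without extracting the rest of the archive."""
    with zipfile.ZipFile(zip_path) as zf:
        return zf.read("log.txt").decode("utf-8")
```

For example, read_collection_log("fullsysteminfodump_<SID><DBNAME><HOST>_<timestamp>.zip") returns the text of the collection log, which is a quick way to verify what was actually gathered before sending the archive to SAP.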

Trace files

Trace files for each host in the landscape, located under $DIR_INSTANCE/<SAPLOCALHOST>/trace/:


compileserver_alert_<SAPLOCALHOST>.trc

compileserver_<SAPLOCALHOST>.<...>.trc

daemon_<SAPLOCALHOST>.<...>.trc

indexserver_alert_<SAPLOCALHOST>.trc

indexserver_<SAPLOCALHOST>.<...>.trc

nameserver_alert_<SAPLOCALHOST>.trc

nameserver_history.trc

nameserver_<SAPLOCALHOST>.<...>.trc

preprocessor_alert_<SAPLOCALHOST>.trc

preprocessor_<SAPLOCALHOST>.<...>.trc

statisticsserver_alert_<SAPLOCALHOST>.trc

statisticsserver_<SAPLOCALHOST>.<...>.trc

xsengine_alert_<SAPLOCALHOST>.trc

xsengine_<SAPLOCALHOST>.<...>.trc


Database System Log File 

$DIR_INSTANCE/<SAPLOCALHOST>/trace/backup.log

$DIR_INSTANCE/<SAPLOCALHOST>/trace/backint.log


Database Configuration File

$DIR_INSTANCE/<SAPLOCALHOST>/exe/config/


compileserver.ini

daemon.ini

executor.ini

extensions.ini

filter.ini

global.ini

indexserver.ini

inifiles.ini

localclient.ini

mimetypemapping.ini

nameserver.ini

preprocessor.ini

scriptserver.ini

statisticsserver.ini

validmimetypes.ini

xsengine.ini

Crashdump and performance trace files

Crashdump files for services are collected unabridged.


Performance trace files with the suffix *.tpt are collected unabridged.

Runtime dump for each index server 

For each index server, an RTE dump file containing information about threads, stack contexts, and so on is created and stored in the file indexserver_<SAPLOCALHOST>_<PORT>_runtimedump.trc.



Kerberos Files



/etc/krb5.conf

/etc/krb5.keytab


System Views

Only the first 2,000 rows are exported.


SYS.M_CE_CALCSCENARIOS WHERE SCENARIO_NAME LIKE '%_SYS_PLE%'

SYS.M_CONNECTIONS with CONNECTION_ID > 0

SYS.M_DATABASE_HISTORY

SYS.M_DEV_ALL_LICENSES

SYS.M_DEV_PLE_SESSIONS_

SYS.M_DEV_PLE_RUNTIME_OBJECTS_

SYS.M_EPM_SESSIONS

SYS.M_INIFILE_CONTENTS

SYS.M_LANDSCAPE_HOST_CONFIGURATION

SYS.M_RECORD_LOCKS

SYS.M_SERVICE_STATISTICS

SYS.M_SERVICE_THREADS

SYS.M_SYSTEM_OVERVIEW

SYS.M_TABLE_LOCATIONS

SYS.M_TABLE_LOCKS

SYS.M_TABLE_TRANSACTIONS

_SYS_EPM.VERSIONS

_SYS_EPM.TEMPORARY_CONTAINERS

_SYS_EPM.SAVED_CONTAINERS

_SYS_STATISTICS.STATISTICS_ALERT_INFORMATION

_SYS_STATISTICS.STATISTICS_ALERT_LAST_CHECK_INFORMATION

_SYS_STATISTICS.STATISTICS_ALERTS

_SYS_STATISTICS.STATISTICS_INTERVAL_INFORMATION

_SYS_STATISTICS.STATISTICS_LASTVALUES

_SYS_STATISTICS.STATISTICS_STATE

_SYS_STATISTICS.STATISTICS_VERSION
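The 2,000-row cap noted above corresponds to a simple LIMIT clause. The statement builder below is an illustrative sketch of that pattern, not the script's actual implementation:

```python
def export_statement(view: str, where: str = "", limit: int = 2000) -> str:
    """Build a SELECT mirroring the documented cap of 2,000 exported rows
    per system view (helper and defaults are illustrative)."""
    clause = f" WHERE {where}" if where else ""
    return f"SELECT * FROM {view}{clause} LIMIT {limit}"

print(export_statement("SYS.M_CONNECTIONS", "CONNECTION_ID > 0"))
# SELECT * FROM SYS.M_CONNECTIONS WHERE CONNECTION_ID > 0 LIMIT 2000
```

Such a statement could be run manually via hdbsql against any of the views listed above if you need a quick look at the same data the dump would export.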




topology.txt

All available topology information is exported to a file named topology.txt. It contains information about the host topology in a tree-like structure.


It contains the following information: nameserver role, available databases, hosts, SAP HANA build time and version, number of CPUs and CPU type, crypto library used, hardware manufacturer, amount of memory and swap, network information, important Linux parameters, OS provider, SAP HANA SID and instance number, and time zone.
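To picture the tree-like layout of topology.txt, here is a small recursive renderer. The nested-dictionary shape and the sample keys are assumptions for illustration only, not the file's real schema:

```python
def render_tree(node: dict, indent: int = 0) -> str:
    """Render nested dictionaries as an indented, tree-like text block,
    similar in spirit to the layout of topology.txt."""
    lines = []
    for key, value in node.items():
        lines.append("  " * indent + str(key))
        if isinstance(value, dict):
            lines.append(render_tree(value, indent + 1))
        elif value is not None:
            lines.append("  " * (indent + 1) + str(value))
    return "\n".join(lines)

sample = {"host": {"hanahost01": {"nameserver role": "master",
                                  "instance number": "00"}}}
print(render_tree(sample))
```

Each nesting level is indented one step further, so hosts appear under the landscape and their attributes appear under each host.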





When connected to the system database of a multiple-container system, the script exclusively collects information from the system views of the system database. Irrespective of the option setting, no information from the system views of tenant databases is gathered.


Only after collecting the dumps should you restart the SAP HANA database.

