How to Enable the Slow Query Log in MySQL, MongoDB & PostgreSQL

Activating the slow query log in MySQL® or MariaDB is a valuable way to pinpoint performance and efficiency problems on your server. By identifying queries with long execution times, you can resolve the issues by restructuring the application that issues them or by optimizing the queries themselves.

To enable the slow query log for MySQL or MariaDB:

Log in to your server as the root user via SSH.
Open the my.cnf file with a text editor and add the following block of code under the [mysqld] section:

slow_query_log = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time = 2

For MySQL versions older than 5.6, use the log-slow-queries variable instead of the slow_query_log_file variable.

For MariaDB 10.11 and later, the slow_query_log variable is renamed to log_slow_query.

Similarly, in MariaDB 10.11 and later, the long_query_time variable is renamed to log_slow_query_time.
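
For example, on MariaDB 10.11 or later the same block might be written with the newer names. This is a sketch only; the older names generally continue to work alongside the new ones:

log_slow_query = 1
slow_query_log_file = /var/log/mysql-slow.log
log_slow_query_time = 2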

Create the /var/log/mysql-slow.log file and set its owner to the mysql user. To do this, run the following commands:

touch /var/log/mysql-slow.log
chown mysql:mysql /var/log/mysql-slow.log

Restart MySQL or MariaDB. On a cPanel & WHM server, run the following command:

/usr/local/cpanel/scripts/restartsrv_mysql
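
The command above is cPanel's wrapper script. On a server without cPanel & WHM, restart the service directly instead; for example, on a systemd-based Linux system (service names vary by distribution and package):

sudo systemctl restart mysql      # typical MySQL service name
sudo systemctl restart mariadb    # typical MariaDB service name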

Start monitoring the slow query log file. To analyze and summarize its contents, run the mysqldumpslow command. For example, to print all slow queries that the system has recorded so far, run the following command:

mysqldumpslow -a /var/log/mysql-slow.log
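
mysqldumpslow can also sort and limit its output. For example, the following command prints the ten queries with the largest total execution time (-s t sorts by query time, -t 10 limits the output to the first ten entries):

mysqldumpslow -s t -t 10 /var/log/mysql-slow.log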

Enable slow query log in MySQL Workbench

To disable or enable the slow query log or change the log file name at runtime, use the global slow_query_log and slow_query_log_file system variables. Set slow_query_log to 0 to disable the log or to 1 to enable it. Set slow_query_log_file to specify the name of the log file.
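
For example, from MySQL Workbench's SQL editor (or any client session with sufficient privileges), a minimal sketch of enabling the log at runtime looks like this; note that settings made with SET GLOBAL do not persist across a server restart unless they are also added to my.cnf:

SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql-slow.log';
SET GLOBAL long_query_time = 2;
SHOW GLOBAL VARIABLES LIKE 'slow_query%';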

Enable slow query log in MongoDB

To capture and log MongoDB queries, you can use MongoDB's built-in logging features and monitoring tools. MongoDB offers several options for tracking query performance and activity. The steps below show how to enable query logging and monitor slow queries:

MongoDB allows you to log queries to a file or the system log. To enable query logging, follow these steps:

Edit your MongoDB configuration file (mongod.conf), which is typically located in the /etc directory on Linux systems.
Add or modify the following settings in the configuration file:

# Enable logging to a file
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true

# Log slow operations with the database profiler
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100 # Adjust the threshold (in milliseconds) for slow queries

Monitor Slow Queries

Utilize MongoDB’s native profiling functionality to observe slow queries. Activate this feature by adjusting the operationProfiling setting in the configuration file, as illustrated earlier. Additionally, you can customize the threshold for slow queries by modifying the slowOpThresholdMs value.

To examine slow operations, use the db.currentOp() command in the MongoDB shell to see what is currently running; operations that have already completed and exceeded the threshold are recorded in the system.profile collection and in the mongod log.
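
For example, the following mongosh command (a sketch) lists operations that have been running for three seconds or longer:

db.currentOp({ "secs_running": { "$gte": 3 } })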

Third-party Monitoring Tools

A range of monitoring tools and services provides advanced query analysis and performance insights for MongoDB. Notable options include:

MongoDB Atlas: This cloud-hosted service by MongoDB comes equipped with integrated monitoring and performance analysis features.


MMS (MongoDB Management Service): MongoDB's legacy hosted monitoring service, offering performance and query analysis capabilities.


Ops Manager: MongoDB’s modern management platform includes tools for monitoring, automation, and advanced performance analysis.

Profiling for Slow Query Log in MongoDB

MongoDB provides profiling as a mechanism to log and analyze slow queries. Profiling can be configured to capture information about the execution of operations, including query times, and is instrumental in identifying performance bottlenecks. Here’s how you can set up profiling for the slow query log in MongoDB:

MongoDB's profiler has three levels (the level is an integer value); a runtime example follows the list:

Level 0: The profiler is off and collects no data. This is the default level.
Level 1: Collects profiling data for slow operations only. By default, slow operations are those slower than 100 milliseconds.
Level 2: Collects profiling data for all database operations.
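
The profiling level can also be changed at runtime from the mongosh shell, per database, without editing the configuration file; a minimal sketch:

db.setProfilingLevel(1, { slowms: 100 })
db.getProfilingStatus()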

Enable Profiling in Configuration File:

Locate your MongoDB configuration file (usually mongod.conf).
Add or modify the operationProfiling section to enable profiling. For example:

operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100

In this example, mode is set to slowOp to focus on slow operations, and slowOpThresholdMs defines the threshold for an operation to be considered slow (e.g., 100 milliseconds).

Start MongoDB with Profiling:

Restart your MongoDB instance to apply the changes.
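
On many Linux installations that use the official MongoDB packages and systemd, that restart might look like this (the service name can differ on your system):

sudo systemctl restart mongod
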
View Slow Queries:

You can use the db.currentOp() command in the MongoDB shell to view currently executing operations and identify slow queries.
Additionally, the system.profile collection stores profiling information, and you can query it to retrieve historical data.
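
For example, a mongosh query along these lines (a sketch) returns the five most recent profiled operations that took longer than 100 milliseconds:

db.system.profile.find({ millis: { $gt: 100 } }).sort({ ts: -1 }).limit(5).pretty()
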
Third-Party Monitoring Tools:

Consider using third-party monitoring and management tools for more comprehensive query analysis and performance insights. Tools like MongoDB Atlas, MMS (MongoDB Management Service), or Ops Manager offer advanced features for monitoring and analyzing queries.

Adjust Profiling Configuration as Needed:

Fine-tune the profiling configuration based on your requirements. Adjust the slowOpThresholdMs value to capture queries with a specific execution time threshold.

Remember to carefully manage profiling in production environments, as excessive profiling can impact performance. It’s recommended to adjust settings based on your specific performance monitoring needs.

Enable slow query log in PostgreSQL

To activate slow query logging on AWS RDS PostgreSQL, adjust the settings in a customized parameter group associated with the respective database instance:

Navigate to the AWS RDS console and select “Databases.”
Choose the relevant database instance.
Access the “Configuration” view to examine the database instance details and identify the associated parameter group.
Note: Newly created RDS databases are linked to the default DB parameter group. Since modifications are not allowed in the default group, create a custom parameter group. When transitioning the DB instance parameter group from default to custom, a database instance reboot is necessary.

Select the parameter group you wish to modify.
Modify the value of the following parameter:
log_min_duration_statement = 1000 (where 1,000 signifies one second, as the value is in milliseconds)
Note: A setting of zero logs the duration of all statements, while minus one (the default) disables duration logging. For instance, a value of 250 logs all SQL statements that run for 250 milliseconds or longer. An AWS CLI example follows these steps.
Ensure to save the changes.

Note: If your database already utilizes a custom parameter group, there’s no need to reboot the instance for these changes to take effect, as these parameters are dynamic.
Verify functionality by executing a few select queries. Return to the console, choose “Databases,” select your instance, and access the “Logs & Events” view. In the “Logs” tab, choose the latest log and click ‘View’ to inspect the log content.
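
If you manage parameter groups with the AWS CLI instead of the console, the same change might look like the following sketch; my-custom-pg is a placeholder for your parameter group name, and because log_min_duration_statement is a dynamic parameter, ApplyMethod=immediate applies it without a reboot:

aws rds modify-db-parameter-group \
  --db-parameter-group-name my-custom-pg \
  --parameters "ParameterName=log_min_duration_statement,ParameterValue=1000,ApplyMethod=immediate"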

To enable slow query logging in PostgreSQL, you typically need to follow these steps:

Edit PostgreSQL Configuration File:

Open the PostgreSQL configuration file. This file is usually named postgresql.conf.
You can find the configuration file in the PostgreSQL data directory. Common locations include /etc/postgresql/{version}/main/ on Linux or C:\Program Files\PostgreSQL\{version}\data on Windows.
Look for the logging_collector parameter and set it to on. This enables the logging collector, which is necessary for capturing logs.
Set Log Statement Duration:

Find the log_min_duration_statement parameter and set its value. This parameter defines the minimum duration, in milliseconds, at which a statement is logged.
For example, log_min_duration_statement = 1000 logs statements that take one second or longer (a consolidated example follows these steps).
Specify Log Destination:

Ensure that the log_destination parameter is set to include the desired log destination. Common values include stderr for logging to the standard error output or csvlog for logging to CSV files.
Restart PostgreSQL:

After making changes to the configuration file, restart the PostgreSQL service to apply the modifications.
On Linux, you can use a command like sudo service postgresql restart.
On Windows, you may need to restart the PostgreSQL service from the Services application.
Review Logs:

Once the logging is enabled and PostgreSQL is restarted, you can review the logs.
The location of the log files depends on your configuration. Common locations include the PostgreSQL data directory or a specified log directory.


For example, you might find logs in a file like postgresql.log or in separate log files if you’ve specified a log directory. Use a text editor or a log viewer to inspect the contents of these log files.
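
Taken together, the relevant postgresql.conf settings from the steps above might look like the following sketch (log_directory and log_filename are optional and shown with example values):

logging_collector = on
log_destination = 'stderr'
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_min_duration_statement = 1000

Because log_min_duration_statement is a dynamic setting, a superuser can also change it on a running server and reload the configuration without a full restart:

ALTER SYSTEM SET log_min_duration_statement = 1000;
SELECT pg_reload_conf();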

Remember to tailor the configuration to your specific needs and consider the impact of logging on performance, especially in a production environment. Adjust the log duration and other settings based on your requirements.

Benefits of reviewing slow query logs

Reviewing logs of slow queries in a database system offers various benefits for monitoring, troubleshooting, and optimizing performance. Here are some key advantages:

Performance Optimization:

Identification of Performance Issues: Slow query logs help identify queries that are impacting the overall performance of the database.
Targeted Optimization: Developers and database administrators can focus their optimization efforts on the specific queries that are causing slowdowns.

Resource Utilization:

Resource Consumption: Slow query logs provide insights into the resource consumption of individual queries, aiding in efficient resource allocation.
Prevention of Resource Exhaustion: Identifying resource-intensive queries helps prevent scenarios where the database exhausts available resources.

User Experience Improvement:

Reduced Response Time: By optimizing slow queries, the overall response time of the application can be improved, leading to a better user experience.
Enhanced Responsiveness: Addressing slow queries contributes to reducing latency and ensuring a more responsive application.

Troubleshooting and Issue Resolution:

Issue Identification: Slow query logs are instrumental in identifying issues such as deadlocks, long-running transactions, or poorly performing queries.
Root Cause Analysis: During troubleshooting, logs provide a historical record for root cause analysis of performance-related problems.

Capacity Planning:

Informed Scaling Decisions: Understanding the performance characteristics of queries helps in making informed decisions about scaling resources, be it upgrading hardware or scaling horizontally by adding more servers.

Security Monitoring:

Anomaly Detection: Unexpected slow queries may indicate potential security threats or unauthorized access. Monitoring logs enhances the ability to detect anomalies.
Security Auditing: Logs can be used for security auditing, tracking potentially malicious or suspicious database activity.

Continuous Monitoring and Trend Analysis:

Proactive Performance Management: Regularly checking slow query logs allows for proactive monitoring of database health and performance.
Trend Identification: Over time, monitoring logs helps in identifying trends and patterns, enabling better long-term performance planning.

Feedback Loop for Development:

Optimizing Application Code: Slow query logs provide a feedback loop to developers, helping them optimize application code and database queries for better performance.
Version Upgrades: When upgrading database versions or making changes to the application, reviewing slow query logs can identify any performance regressions.

In conclusion, examining logs of slow queries is crucial for maintaining a well-performing database system, ensuring optimal resource utilization, and providing insights for continuous improvement. It serves as a proactive measure to address issues before they significantly impact the user experience or overall system performance.