┌─ FILE ANALYSIS ─────────────────────────────────────────────
DEVELOPER : N/A (Universal)
CATEGORY  : System
MIME TYPE : text/plain
─────────────────────────────────────────────────────────────

What is a LOG file?

LOG files are plain text files that record events, errors, and activities from applications, operating systems, and network services in chronological order. They typically contain timestamped entries with severity levels (DEBUG, INFO, WARN, ERROR, FATAL), making them the primary tool for debugging issues, monitoring system health, and auditing security events.

Every major software system produces log files: web servers, databases, operating systems, cloud services, and applications all write logs. Effective log analysis is a core skill in software development, DevOps, and system administration. Log files are also essential for security incident response — they provide the evidence trail of what happened, when, and from where.

How to open LOG files

  • Notepad (Windows) — Simple viewing for small files
  • VS Code (Windows, macOS, Linux) — Handles large files well; syntax highlighting extensions available
  • Console (macOS) — Built-in log viewer with filtering
  • tail -f (Linux/macOS) — running tail -f app.log streams new entries in real time as they are written
  • LogExpert (Windows) — Free specialized log viewer with filtering and column highlighting
  • Glogg / klogg (Windows, macOS, Linux) — Fast viewer for very large log files (GB+)

Technical specifications

Property          Value
Format            Plain text (usually)
Encoding          UTF-8 or ASCII
Structure         Timestamped entries (format varies by application)
Rotation          Size-based or time-based (daily, weekly)
Severity levels   DEBUG, INFO, WARN, ERROR, FATAL (varies)
Common locations  /var/log/ (Linux), C:\Windows\Logs\ (Windows), ~/Library/Logs/ (macOS)

Common log formats

Apache / Nginx access log:

192.168.1.1 - - [22/Feb/2026:10:30:45 +0000] "GET /index.html HTTP/1.1" 200 1234

Syslog (RFC 5424):

<34>1 2026-02-22T10:30:45.123Z hostname appname 1234 - - Application started
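Because RFC 5424 fields are space-separated, awk can split out the header without any special tooling. A minimal sketch, using a sample line echoed inline (a complete line begins with a <pri>version prefix such as <34>1; field positions below assume that prefix is present):

```shell
# Extract timestamp, hostname, and app name from an RFC 5424-style line.
# With the <pri>version prefix as field 1, the header fields are $2-$4.
echo '<34>1 2026-02-22T10:30:45.123Z hostname appname 1234 - - Application started' |
  awk '{print "time=" $2, "host=" $3, "app=" $4}'
# -> time=2026-02-22T10:30:45.123Z host=hostname app=appname
```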

JSON log (modern structured logging):

{"timestamp":"2026-02-22T10:30:45Z","level":"ERROR","msg":"DB timeout","duration_ms":5001}

JSON logs are increasingly preferred because each field can be queried programmatically, by name, without brittle regex parsing.
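A minimal sketch of such a query: python3's stdlib json module stands in for a dedicated tool like jq, and sample.log is a hypothetical two-entry file created inline.

```shell
# Create a two-line JSON log (one object per line), then select ERROR
# entries by field name rather than by regex.
printf '%s\n' \
  '{"timestamp":"2026-02-22T10:30:45Z","level":"ERROR","msg":"DB timeout","duration_ms":5001}' \
  '{"timestamp":"2026-02-22T10:30:46Z","level":"INFO","msg":"request ok","duration_ms":12}' \
  > sample.log
python3 - <<'EOF'
import json
for line in open("sample.log"):
    entry = json.loads(line)
    if entry["level"] == "ERROR":
        print(entry["timestamp"], entry["msg"], str(entry["duration_ms"]) + "ms")
EOF
# -> 2026-02-22T10:30:45Z DB timeout 5001ms
```

The same select-by-field query in jq would be a one-liner, but the point is identical: structured logs are filtered on parsed fields, not on text patterns.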

Common use cases

  • Debugging: Finding root causes of application errors and exceptions
  • Performance monitoring: Identifying slow requests, high memory usage, or resource bottlenecks
  • Security auditing: Detecting unauthorized access attempts, privilege escalation, or data exfiltration
  • Compliance: Regulatory requirements (PCI DSS, HIPAA, SOC 2) mandate audit logs
  • Analytics: Web server access logs feed traffic analysis and SEO tools

Log management and rotation

Without rotation, log files grow indefinitely and fill disk space. logrotate (Linux) automates rotation:

# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
}

This rotates daily and keeps 30 generations, compresses rotated files with gzip (delaycompress postpones compressing the newest rotated file by one cycle so the application can finish writing to it), tolerates missing files, skips empty ones, and runs any post-rotation scripts once per run rather than once per file.
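What one rotation cycle amounts to can be sketched by hand in shell (file names here are illustrative; logrotate automates this plus retention, compression scheduling, and signaling the application):

```shell
# Manual sketch of one rotation cycle: demote the live log one generation,
# compress the rotated copy, and recreate an empty live file for the app.
echo "old entries" > demo_app.log
mv demo_app.log demo_app.log.1   # rotate: live file becomes generation 1
gzip -f demo_app.log.1           # compress (delaycompress would wait a cycle)
: > demo_app.log                 # recreate an empty live log
ls demo_app.log*                 # demo_app.log  demo_app.log.1.gz
```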

Centralized log management

Production systems aggregate logs from multiple servers into centralized platforms:

  • ELK Stack: Elasticsearch + Logstash + Kibana — open source, self-hosted
  • Grafana Loki: Lightweight log aggregation by Grafana Labs
  • Datadog / Splunk: Commercial platforms with powerful search and alerting
  • AWS CloudWatch / GCP Cloud Logging: Cloud-native log services

Centralized logging enables searching across all services simultaneously, setting alerts on error patterns, and retaining logs for compliance periods.

Reading logs effectively

Key techniques for log analysis:

# Follow a live log
tail -f /var/log/nginx/access.log

# Filter for errors
grep "ERROR" app.log

# Count requests by HTTP status code ($9 in common/combined log format)
awk '{print $9}' access.log | sort | uniq -c | sort -rn

# Find entries in a time range
sed -n '/10:00:00/,/11:00:00/p' app.log

# Last 100 lines
tail -n 100 app.log
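The status-code pipeline above can be checked against a small hypothetical access log created inline:

```shell
# Three sample lines in common log format (field 9 is the status code).
cat > sample_access.log <<'EOF'
192.168.1.1 - - [22/Feb/2026:10:30:45 +0000] "GET /index.html HTTP/1.1" 200 1234
192.168.1.2 - - [22/Feb/2026:10:30:46 +0000] "GET /missing HTTP/1.1" 404 512
192.168.1.3 - - [22/Feb/2026:10:30:47 +0000] "GET / HTTP/1.1" 200 980
EOF
# Count requests per status code, most frequent first:
awk '{print $9}' sample_access.log | sort | uniq -c | sort -rn
#       2 200
#       1 404
```

The same sort | uniq -c | sort -rn idiom works for any column: swap $9 for $1 to rank client IPs, or $7 to rank requested paths.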