Setting up worker_processes

The Nginx documentation explains the master and worker process architecture as follows: “nginx has one master process and several worker processes. The main purpose of the master process is to read and evaluate configuration, and maintain worker processes. Worker processes do actual processing of requests. nginx employs event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes.”

In other words, worker_processes tells Nginx how many worker processes to run, usually one per CPU core, so that requests are handled efficiently. The default Nginx configuration file is /etc/nginx/nginx.conf.

To find out how many cores you have on your web server, run the following command.

# grep processor /proc/cpuinfo | wc -l
4

worker_connections tells each worker process how many clients it can serve at one time. The default value is 768, but remember that each browser usually opens at least two connections to a server. The value is also capped by the system's open-file limit, which is commonly 1024, so raise that limit if you want to get the full potential out of Nginx. With one core per worker process, setting worker_connections to 1024 means Nginx can serve roughly worker_processes × 1024 clients simultaneously.
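
To see the current open-file limit, you can run ulimit; on many Linux systems the soft limit defaults to 1024:

# ulimit -n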

worker_processes  4;
events {
    worker_connections  8096;   # connections each worker process may open
    multi_accept        on;     # accept as many new connections as possible at once
    use                 epoll;  # efficient event notification mechanism on Linux
}
worker_rlimit_nofile 100000;    # raise the open-file limit (RLIMIT_NOFILE) for workers
error_log /var/log/nginx/error.log crit;   # log only critical errors
http {
    sendfile           on;     # serve static files with the kernel's sendfile()
    tcp_nopush         on;     # send headers and the start of a file in one packet
    tcp_nodelay        on;     # disable Nagle's algorithm on keep-alive connections
    keepalive_timeout  30;
    access_log         off;    # disable access logging to save disk I/O
    reset_timedout_connection on;   # close timed-out connections and free their memory
    client_body_timeout 10;
    send_timeout 2;
    keepalive_requests 100000;
    open_file_cache max=200000 inactive=20s;   # cache descriptors of frequently used files
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
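
After changing nginx.conf, it is a good idea to check the syntax and reload Nginx so the new settings take effect (systemctl is used here; on non-systemd systems you can run nginx -s reload instead):

# nginx -t
# systemctl reload nginx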

Gzip compression

Gzip compression can reduce the amount of data Nginx has to send over the network. However, be careful not to set gzip_comp_level too high, or the server will start wasting CPU cycles for little additional gain.
http {
...
    gzip on;
    gzip_disable msie6;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 32 16k;
    gzip_min_length 250;
    gzip_types image/jpeg image/bmp image/svg+xml text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon;
}
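
To check that compression is actually applied, request a text resource with an Accept-Encoding: gzip header and look for Content-Encoding: gzip in the response headers (the URL below is just a placeholder):

# curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" https://example.com/style.css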

Caching

An easy way to reduce the load on your server is to let clients cache static content so that they do not request it again while it is still fresh.

To do this, you need to set suitable caching headers. A simple way is to declare which content types should be cached and for how long:

server {
...
    location / {
       ...
    }
    ...
    location ~* \.(jpg|jpeg|png|gif|ico|xml)$ {
       expires 30d;
    }
    location ~* \.(css|js)$ {
       expires 7d;
    }
...
}
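
To make sure the headers are being sent, request one of the static files and check the Expires and Cache-Control headers in the response; with expires 30d, Nginx sends Cache-Control: max-age=2592000 (the URL below is just a placeholder):

# curl -s -D - -o /dev/null https://example.com/logo.png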

HTTP/2 Support

HTTP/2 has many advantages over HTTP/1.1: for example, it allows the browser to download files in parallel over a single connection and allows the server to push resources, among other things. All you have to do is add http2 to the listen directives in your TLS server block:

server {
...
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name example.com;
...
}
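
You can verify that HTTP/2 is negotiated with curl (built with HTTP/2 support); the status line of the response should start with HTTP/2 (example.com is a placeholder):

# curl -I --http2 https://example.com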

Redirect WWW

To send visitors to a single canonical address, keep your main server block and add blocks that return a 301 redirect from plain HTTP and from the www host to https://example.com:
server {
...
}
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
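
To test the redirect, request the www host over plain HTTP and make sure the response is a 301 with a Location header pointing at https://example.com (again, example.com stands in for your own domain):

# curl -I http://www.example.com/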

Simple DDoS protection

This is far from complete DDoS protection, but it can slow down a small-scale DDoS attack.
http {
...
# limit the number of connections per IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# limit the number of requests for this session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=10r/s;

# apply the zones defined above; here we want to limit the entire server

server {
    limit_conn conn_limit_per_ip 20;
    limit_req zone=req_limit_per_ip burst=20 nodelay;
}

# if the request body size is larger than the buffer size, then the full (or partial) request body is written to a temporary file
client_body_buffer_size  128k;

# buffer size for reading client request header
client_header_buffer_size 3m;

# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;

# timeout for reading the client request body
client_body_timeout   3m;

# how long to wait for the client to send the request header
client_header_timeout 3m;
...
}
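
With rate=10r/s and burst=20, a fast burst of requests beyond those values should start receiving 503 responses (the default status for limit_req). A rough way to test this from another machine, with example.com as a placeholder:

# for i in $(seq 1 50); do curl -s -o /dev/null -w "%{http_code}\n" http://example.com/; done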

Improve security

By default, Nginx does not send a number of useful security headers, and adding them is quite simple. They help protect against clickjacking, cross-site scripting, and other code-injection attacks.
http {
...
   add_header X-Frame-Options "SAMEORIGIN" always;
   add_header X-XSS-Protection "1; mode=block" always;
   add_header X-Content-Type-Options "nosniff" always;
   add_header Referrer-Policy "no-referrer-when-downgrade" always;
   add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
   add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
...
}
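
To confirm that the headers are actually returned, inspect a response (example.com is a placeholder):

# curl -s -D - -o /dev/null https://example.com | grep -iE 'x-frame|x-xss|x-content-type|referrer-policy|content-security|strict-transport'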