Setting up NGINX for mail proxying

This article will explain how to configure NGINX Plus or NGINX Open Source as a proxy for a mail server or an external mail service.

Introduction

NGINX can proxy the IMAP, POP3 and SMTP protocols to one of the upstream mail servers that host mail accounts, and can thus be used as a single endpoint for email clients. This brings a number of benefits, such as:

  • easy scaling of the number of mail servers
  • choosing a mail server based on different rules, for example the server nearest to a client’s IP address
  • distributing load among the mail servers

Prerequisites

    NGINX Plus (which already includes the Mail modules necessary to proxy email traffic) or NGINX Open Source compiled with the Mail modules: the --with-mail parameter enables email proxy functionality and --with-mail_ssl_module enables SSL/TLS support:

    $ ./configure --with-mail --with-mail_ssl_module --with-openssl=[DIR]/openssl-1.1.1

    IMAP, POP3 and/or SMTP mail servers or an external mail service

Configuring SMTP/IMAP/POP3 Mail Proxy Servers

In the NGINX configuration file:

Specify the mail context at the top level of the configuration:

mail {
    # ...
}

Specify the name of your mail server with the server_name directive:

mail {
    server_name mail.example.com;
    # ...
}

Specify the HTTP authentication server with the auth_http directive:

mail {
    server_name mail.example.com;
    auth_http   localhost:9000/cgi-bin/nginxauth.cgi;
    # ...
}

Alternatively, specify whether to inform the user about errors from the authentication server with the proxy_pass_error_message directive. This may be handy when, for example, a mailbox runs out of space:

mail {
    server_name mail.example.com;
    auth_http   localhost:9000/cgi-bin/nginxauth.cgi;
    proxy_pass_error_message on;
    # ...
}

Configure each SMTP, IMAP, or POP3 server with a server block. For each server, specify:

  • the port number that corresponds to the specified protocol, with the listen directive
  • the protocol, with the protocol directive (if omitted, it is detected automatically from the port specified in listen)
  • the permitted authentication methods, with the imap_auth, pop3_auth, and smtp_auth directives:

server {
    listen    25;
    protocol  smtp;
    smtp_auth login plain cram-md5;
}
server {
    listen    110;
    protocol  pop3;
    pop3_auth plain apop cram-md5;
}
server {
    listen   143;
    protocol imap;
}

Setting up Authentication for a Mail Proxy

Each POP3/IMAP/SMTP request from a client is first authenticated against an external HTTP authentication server or an authentication script. An authentication server is mandatory for the NGINX mail proxy. You can write such a server yourself, following the NGINX authentication protocol, which is based on HTTP.

If authentication is successful, the authentication server will choose an upstream server and redirect the request. In this case, the response from the server will contain the following lines:

HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: # the server name or IP address of the upstream server that will be used for mail processing
Auth-Port:   # the port of the upstream server

If authentication fails, the authentication server will return an error message. In this case, the response from the server will contain the following lines:

HTTP/1.0 200 OK
Auth-Status: # an error message to be returned to the client, for example “Invalid login or password”
Auth-Wait:   # the number of remaining authentication attempts before the connection is closed

Note that in both cases the response contains HTTP/1.0 200 OK, which might be confusing.

For more examples of requests to and responses from the authentication server, see the ngx_mail_auth_http_module page in the NGINX reference documentation.
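To make the protocol concrete, here is a minimal sketch of such an authentication service in Python. The Auth-* header names follow the ngx_mail_auth_http_module protocol described above; the credential store and upstream addresses are made-up examples, not part of any real deployment.

```python
# Minimal sketch of an HTTP authentication service for the NGINX mail proxy.
# The Auth-* header names follow the ngx_mail_auth_http_module protocol;
# the user database and upstream addresses below are hypothetical examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical credential store: user -> (password, upstream server, port)
USERS = {"user1@example.com": ("secret", "192.168.1.10", 143)}

def auth_headers(user, password):
    """Build the response headers that NGINX expects from the auth server."""
    entry = USERS.get(user)
    if entry and entry[0] == password:
        _, server, port = entry
        return {"Auth-Status": "OK", "Auth-Server": server, "Auth-Port": str(port)}
    # On failure, Auth-Status carries the error text shown to the client.
    return {"Auth-Status": "Invalid login or password", "Auth-Wait": "3"}

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        headers = auth_headers(self.headers.get("Auth-User", ""),
                               self.headers.get("Auth-Pass", ""))
        self.send_response(200)  # always 200 OK, even when authentication fails
        for name, value in headers.items():
            self.send_header(name, value)
        self.end_headers()

# To serve requests on the port from the auth_http directive, run:
# HTTPServer(("127.0.0.1", 9000), AuthHandler).serve_forever()
```

A real service would of course check credentials against a user database and could pick the upstream server by any rule (nearest server, least loaded, and so on).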

Setting up SSL/TLS for a Mail Proxy

Using POP3/SMTP/IMAP over SSL/TLS ensures that data passed between a client and a mail server is secured.

To enable SSL/TLS for the mail proxy:

Make sure your NGINX is configured with SSL/TLS support by running the nginx -V command and looking for --with-mail_ssl_module in the output:

$ nginx -V
configure arguments: ... --with-mail_ssl_module ...

Make sure you have obtained server certificates and a private key and put them on the server. A certificate can be obtained from a trusted certificate authority (CA) or generated using an SSL library such as OpenSSL.
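For testing purposes, a self-signed certificate and key pair can be generated with OpenSSL like this (the file names and the CN are examples, not values the article prescribes):

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=mail.example.com"
```

Browsers and mail clients will warn about a self-signed certificate, so for production use a certificate from a trusted CA.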

Enable SSL/TLS for the mail proxy with the ssl directive:

ssl on;

Alternatively, enable STARTTLS, which upgrades a plain-text connection to an encrypted one:

starttls on;

Add SSL certificates: specify the path to the certificates (which must be in PEM format) with the ssl_certificate directive, and the path to the private key with the ssl_certificate_key directive:

mail {
    # ...
    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
}

You can restrict connections to strong versions and ciphers of SSL/TLS with the ssl_protocols and ssl_ciphers directives, or set your own preferred protocols and ciphers:

mail {
    # ...
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers   HIGH:!aNULL:!MD5;
}

Optimizing SSL/TLS for Mail Proxy

These hints will help you make your NGINX mail proxy faster and more secure:

Set the number of worker processes equal to the number of processors with the worker_processes directive, placed at the same level as the mail context:

worker_processes auto;

mail {
    # ...
}

Enable the shared session cache and disable the built-in session cache with the ssl_session_cache shared:SSL:10m; directive, and optionally increase the session lifetime with ssl_session_timeout. Putting everything together, the complete configuration looks like this:

worker_processes auto;

mail {
    server_name mail.example.com;
    auth_http   localhost:9000/cgi-bin/nginxauth.cgi;
    proxy_pass_error_message on;

    ssl                 on;
    ssl_certificate     /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen    25;
        protocol  smtp;
        smtp_auth login plain cram-md5;
    }
    server {
        listen    110;
        protocol  pop3;
        pop3_auth plain apop cram-md5;
    }
    server {
        listen   143;
        protocol imap;
    }
}

In this example, there are three email proxy servers: SMTP, POP3 and IMAP. Each of the servers is configured with SSL and STARTTLS support. SSL session parameters will be cached.

The proxy server uses the HTTP authentication server – its configuration is beyond the scope of this article. All error messages from the server will be returned to clients.

iRedMail is a ready-made open-source mail server bundle. The bundle is built around the Postfix SMTP server (a Mail Transfer Agent, or MTA) and also includes Dovecot, SpamAssassin, Greylist, ClamAV, SOGo, Roundcube, NetData and NGINX.

Dovecot is an IMAP/POP3 server.

SpamAssassin is a spam-filtering tool.

Greylist is a greylist-based anti-spam tool.

ClamAV is an antivirus engine.

Roundcube and SOGo are web clients for working with email.

NetData is a real-time server monitoring program.

NGINX is a web server.

Supported operating systems: CentOS 7, Debian 9, Ubuntu 16.04/18.04, FreeBSD 11/12 and OpenBSD 6.4.

iRedMail has paid and free versions, which differ in the functionality of the bundle's own web interface, iRedAdmin. In the free version you can only create domains, user mailboxes and administrator mailboxes. If you need to create an alias, the free version of iRedAdmin will not let you. Fortunately, there is a free solution called PostfixAdmin that can do this. PostfixAdmin integrates easily into iRedMail and works well with it.

Installation

To install, we will need one of the operating systems listed above; I will be using Ubuntu Server 18.04. You must also have a purchased domain name and a configured DNS zone. If you are using your domain registrar's DNS servers, you need to make two entries in the domain zone management section: an A record and an MX record. You can also use your own DNS by setting up delegation in your domain registrar's control panel.

Setting up a domain zone when using a DNS registrar

Note! DNS changes can take anywhere from several hours to one week to propagate. Until the settings take effect, the mail server will not work correctly.

To install, download the current version from the iRedMail website. At the time of writing it is 0.9.9.

# wget https://bitbucket.org/zhb/iredmail/downloads/iRedMail-0.9.9.tar.bz2

Then unpack the downloaded archive.

# tar xjf iRedMail-0.9.9.tar.bz2

Unpacking the archive

And go to the created folder.

# cd iRedMail-0.9.9

iRedMail installer folder

Checking the contents of the folder

Folder Contents

And run the iRedMail installation script.

# bash iRedMail.sh

The installation of the mail system will start. During the installation process you will need to answer a number of questions. We agree to begin the installation.

Start installation

Selecting the installation directory

Now you need to select a web server. There is not much choice, so we choose NGINX.

Selecting a Web Server

Now you need to select a database server that will be installed and configured to work with the mail system. Select MariaDB.

Selecting a database server

Set the root password for the database.

Creating a database root password

Now we indicate our email domain.

Creating a mail domain

Then we create a password for the administrator’s mailbox [email protected].

Creating a mail administrator password

Selecting Web Components

Confirm the specified settings.

Confirming settings

The installation has started.

Installation

Once the installation is complete, confirm the creation of the iptables rule for SSH and restart the firewall. iRedMail works with iptables. In Ubuntu, the most commonly used firewall-management utility is UFW. If for one reason or another you need it, install UFW (apt install ufw) and add rules so that it does not block the mail server (for example, ufw allow "Nginx Full" or ufw allow Postfix). You can view the list of available application profiles with ufw app list, then enable UFW with ufw enable.

Creating an iptables rule

Restarting the firewall

This completes the installation of iRedMail. The system provided us with web interface addresses and login credentials. To enable all mail system components, you must reboot the server.

Completing installation

Let's reboot.

# reboot

Settings

First you need to make sure that everything works. Try to log into the iRedAdmin control panel at https://domain/iredadmin. The login is [email protected], with the password created during installation. A Russian-language interface is available.

As you can see, everything works. When logging into iRedAdmin, you most likely received a certificate security error. This happens because iRedMail ships with a built-in self-signed certificate, which the browser complains about. To resolve this, install a valid SSL certificate. If you have purchased one, you can install it; in this example I will install a free certificate from Let's Encrypt.

Installing a Let's Encrypt SSL certificate

We will install the certificate using the certbot utility. First, let's add a repository.

# add-apt-repository ppa:certbot/certbot

Then install certbot itself with the necessary components.

# apt install python-certbot-nginx

We receive a certificate.

# certbot --nginx -d domain.ru

After running the command, the system will ask you to enter your email address; enter it. Afterwards you will most likely receive an error saying that the server block for which the certificate was generated cannot be found. In this case that is normal, since we don't have such a server block; the main thing is to obtain the certificate.

Obtaining a certificate

As we can see, the certificate was successfully obtained and the system showed us the paths to the certificate itself and to the key: exactly what we need. In total we received four files, stored in the "/etc/letsencrypt/live/domain" folder. Now we need to tell the web server about our certificate, that is, replace the embedded certificate with the one we just received. To do this, we need to edit just one file.

# nano /etc/nginx/templates/ssl.tmpl

And we change the last two lines in it.

Replacing the SSL certificate

We change the paths in the file to the paths that the system told us when receiving the certificate.

Replacing an SSL certificate
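After the edit, the two certificate lines in the template would look roughly like this (a sketch: certbot names the files fullchain.pem and privkey.pem, and "domain" here stands in for your actual domain):

```nginx
ssl_certificate     /etc/letsencrypt/live/domain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain/privkey.pem;
```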

And restart NGINX.

# service nginx restart

Now let's try to log into iRedAdmin again.

Verifying SSL certificate

There is no more certificate error; the certificate is valid. You can click the lock icon and view its properties. When the certificate expires, certbot should renew it automatically.

Now we need to tell Dovecot and Postfix about the new certificate. To do this, we will edit two configuration files. Run:

# nano /etc/dovecot/dovecot.conf

Finding the block:

#SSL: Global settings.

And we change the certificate registered there to ours.

Replacement certificate for Dovecot
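After the change, the SSL section of dovecot.conf would look roughly like this (a sketch: Dovecot's leading "<" tells it to read the file contents, and "domain" stands in for your actual domain):

```
ssl_cert = </etc/letsencrypt/live/domain/fullchain.pem
ssl_key = </etc/letsencrypt/live/domain/privkey.pem
```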

Also pay attention to the line "ssl_protocols". Its value must be "!SSLv3", otherwise you will receive the error "Warning: SSLv2 not supported by OpenSSL. Please consider removing it from ssl_protocols" when restarting Dovecot.

# nano /etc/postfix/main.cf

Finding the block:

# SSL key, certificate, CA

And change the paths in it to the paths to our certificate files.

Replacing a certificate for Postfix
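The corresponding parameters in Postfix's main.cf would look roughly like this (a sketch using Postfix's standard TLS parameter names; "domain" stands in for your actual domain):

```
smtpd_tls_key_file  = /etc/letsencrypt/live/domain/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/domain/cert.pem
smtpd_tls_CAfile    = /etc/letsencrypt/live/domain/chain.pem
```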

This completes the installation of the certificate. Restart Dovecot and Postfix; better yet, reboot the server.

# service dovecot restart

# reboot

Installing PHPMyAdmin

This step is optional, but I recommend doing it and installing PHPMyAdmin for ease of working with databases.

# apt install phpmyadmin

The installer will ask which web server to configure PHPMyAdmin for; since NGINX is not on the list, just press TAB and move on.

Installing PHPMyAdmin

After the installation is complete, in order for PHPMyAdmin to work you need to make a symlink into the directory that NGINX serves by default.

# ln -s /usr/share/phpmyadmin /var/www/html

And we try to go to https://domain/phpmyadmin/

PHPMyAdmin is running; the connection is protected by the certificate and there are no errors. Moving on, let's create a MySQL (MariaDB) database administrator.

# mysql

And we get to the MariaDB management console. Next, we run the commands one by one:

MariaDB > CREATE USER "admin"@"localhost" IDENTIFIED BY "password";
MariaDB > GRANT ALL PRIVILEGES ON *.* TO "admin"@"localhost" WITH GRANT OPTION;
MariaDB > FLUSH PRIVILEGES;

Creating a MySQL User

Everything is OK, login is complete. PHPMyAdmin is ready to go.

Installing PostfixAdmin

In principle, PostfixAdmin, like PHPMyAdmin, does not need to be installed: the mail server will work fine without these components, but then you will not be able to create mail aliases. If you don't need them, you can safely skip these sections. If you do need aliases, you have two options: purchase the paid version of iRedAdmin, or install PostfixAdmin. Of course, you could also manage without additional software by registering aliases in the database manually, but this is not always convenient and not for everyone. I recommend PostfixAdmin; we'll look at its installation and integration with iRedMail now. Let's start the installation:

# apt install postfixadmin

We agree and create a password for the program’s system database.

Installing PostfixAdmin

Installing PostfixAdmin

We create a symlink in the same way as when installing PHPMyAdmin.

# ln -s /usr/share/postfixadmin /var/www/html

We make the user on whose behalf the web server is launched the owner of the directory. In our case, NGINX is launched as the www-data user.

# chown -R www-data /usr/share/postfixadmin

Now we need to edit the PostfixAdmin configuration file and add information about the database that iRedAdmin uses. By default this database is called vmail; if you go into PHPMyAdmin you can see it there. So, in order for PostfixAdmin to be able to make changes to the database, we specify it in the PostfixAdmin configuration.

# nano /etc/postfixadmin/config.inc.php

We find the lines:

$CONF["database_type"] = $dbtype;
$CONF["database_host"] = $dbserver;
$CONF["database_user"] = $dbuser;
$CONF["database_password"] = $dbpass;
$CONF["database_name"] = $dbname;

And change them to:

$CONF["database_type"] = "mysqli"; # Database type
$CONF["database_host"] = "localhost"; # Database server host
$CONF["database_user"] = "admin"; # Login with rights to write to the vmail database. You can use the previously created admin
$CONF["database_password"] = "password"; # Password for the user specified above
$CONF["database_name"] = "vmail"; # Database name iRedMail

Entering information about the database

If you plan to use the SOGo web mail client, you need to do one more step: change the PostfixAdmin encryption scheme in the $CONF["encrypt"] item from "md5crypt" to "dovecot:SHA512-CRYPT". If you do not do this, then when you try to log in to SOGo as a user created in PostfixAdmin, you will receive an "incorrect login or password" error.

Changing the encryption type
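The resulting line in /etc/postfixadmin/config.inc.php would look roughly like this (a sketch assuming the default config layout):

```php
// Use Dovecot's SHA512-CRYPT scheme so that SOGo accepts passwords
// created through PostfixAdmin.
$CONF['encrypt'] = 'dovecot:SHA512-CRYPT';
```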

Now, in order to complete the installation successfully and avoid errors, you must run a query against the database. It is convenient to do this through PHPMyAdmin: select the vmail database, go to the SQL tab, and enter:

DROP INDEX domain on mailbox;
DROP INDEX domain on alias;
ALTER TABLE alias ADD COLUMN `goto` text NOT NULL;

Database Query

And click "Go". Now everything is ready; we can open the PostfixAdmin web interface and complete the installation. To do this, go to https://domain/postfixadmin/setup.php in your browser.

The following should appear:

Installing PostfixAdmin

If everything was done according to the instructions, there should be no errors. If there are any, they must be fixed, otherwise the system will not let you continue. Set the installation password and click "Generate password hash". The system will generate a password hash, which must be inserted into the $CONF["setup_password"] parameter.

Completing the installation of PostfixAdmin

Changing configuration file settings

Now enter the newly created password and create the PostfixAdmin administrator. It is better not to create an administrator with the postmaster login, as this may cause problems logging into the iRedAdmin administration panel.

Creating a PostfixAdmin Administrator

That's it, the administrator has been created. You can sign in.

Please note that from a security point of view, it is better to rename or delete the setup.php file in the postfixadmin directory.

Go to https://domain/postfixadmin/ and enter the newly created credentials. In PostfixAdmin, as in iRedAdmin, a Russian-language interface is available; you can select it at login.

We are trying to create a user mailbox.

Enabling/Disabling iRedMail modules

iRedAPD is responsible for managing iRedMail modules. It has a configuration file in which working modules are registered. If you don't need a particular module, you can remove it from the configuration file and it will stop working. We do:

# nano /opt/iredapd/settings.py

Find the "plugins" line and remove the components you don't need from it. I'll remove the "greylisting" component: it protects against spam quite effectively, but wanted letters often fail to arrive.

Greylisting is an automatic spam-protection technique based on analyzing the behavior of the sending mail server. When greylisting is enabled, the server refuses to accept a letter from an unknown address the first time, reporting a temporary error; the sending server must then retry later. Spammer software usually doesn't retry. If the letter is sent again, the sender is added to the list for 30 days and subsequent mail is accepted on the first attempt. Decide for yourself whether to use this module.

Enabling/Disabling mail modules

After making changes, you must restart iRedAPD.

# service iredapd restart

Testing the mail server

This completes the configuration of the iRedMail mail server, and we can proceed to the final stage: testing. Let's create two mailboxes, one through iRedAdmin and the other through PostfixAdmin, then send a letter from one mailbox to the other and back. In iRedAdmin we will create the mailbox [email protected]; in PostfixAdmin, [email protected].

Creating a user in iRedAdmin

Creating a user in PostfixAdmin

We check that users have been created.

If you look at the "To" column in the PostfixAdmin mailbox list, you will notice a difference between mailboxes created in iRedAdmin and in PostfixAdmin: those created in iRedAdmin are marked "Forward only", while those created in PostfixAdmin are marked "Mailbox". At first I couldn't understand why this was happening, but eventually I noticed one thing: mailboxes in iRedAdmin are created without aliases, while mailboxes in PostfixAdmin are created with an alias pointing to themselves.

And if these aliases are deleted, the mailboxes will be displayed as "Forward only", just like those created in iRedAdmin.

Removing aliases

Aliases have been removed. Checking PostfixAdmin.

As you can see, all the mailboxes have become "Forward only". Likewise, if you create a self-alias for a mailbox created in iRedAdmin, it becomes "Mailbox". In principle this does not affect mail delivery in any way; the only limitation is that you will not be able to create an alias on a mailbox created in PostfixAdmin, and will instead need to edit the existing one. Speaking of aliases: in the new version of iRedMail you need to change one of the Postfix maps that handles aliases, otherwise the created aliases will not work. To do this, edit the /etc/postfix/mysql/virtual_alias_maps.cf file:

# nano /etc/postfix/mysql/virtual_alias_maps.cf

And we fix it.

Setting up aliases

Restart Postfix:

# service postfix restart

After this everything should work.

Now let's start checking mail. We will log into the user1 mailbox via Roundcube and into the user2 mailbox via SOGo, and send a letter from user1 to user2 and back.

Sending an email with Roundcube

Receiving a letter in SOGo

Sending an email to SOGo

Receiving a letter in Roundcube

Everything works without problems; delivery of a letter takes from two to five seconds. Letters are likewise delivered without issue to the Yandex and mail.ru servers (tested).

Now let's check the aliases. We'll create a user3 mailbox and make an alias from the user1 mailbox to the user2 mailbox, then send a letter from user3 to user1. In this case, the letter should arrive in user2's mailbox.

Creating an alias

Sending a letter from user3 mailbox to user1 mailbox

Receiving a letter on user2's mailbox

Aliases also work fine.

Let's test the mail server with a local mail client, using Mozilla Thunderbird as an example. We'll create two more users, client1 and client2, connect one mailbox via IMAP and the other via POP3, and send a letter from one mailbox to the other.

IMAP connection

Connection via POP3

We send a letter from Client 1 to Client 2.

Sending from Client 1

Receipt on Client 2

And in reverse order.

Sending from Client 2

Receipt on Client 1

Everything is working.

If you go to the address: https://domain/netdata, you can see graphs of the system state.

Conclusion

This completes the installation, configuration and testing of the iRedMail mail system. As a result, we got a completely free, full-featured mail server with a valid SSL certificate, two different web mail clients, two control panels, and built-in antispam and antivirus. If you wish, instead of web mail clients you can use local clients such as Microsoft Outlook or Mozilla Thunderbird. If you don't plan to use web mail clients, you can skip installing them entirely so as not to burden the server, or install just the one you like best. I personally prefer SOGo because its interface is optimized for mobile devices, which makes it very convenient to read email from a smartphone. The same goes for NetData and iRedAdmin: if you don't plan to use them, it's better not to install them. This mail system is not very demanding on resources: all of this runs on a VPS with 1024 MB of RAM and one virtual processor. If you have any questions about this mail system, write in the comments.

P.S. When testing this product on various operating systems with 1 GB of RAM (Ubuntu, Debian, CentOS), it turned out that 1 GB is not enough for ClamAV to work. In almost all cases with 1 GB of memory the antivirus reported a database-related error. On Debian and Ubuntu the antivirus simply did not scan mail passing through the server; otherwise everything worked fine. On CentOS the situation was worse: the clamd service overloaded the system completely, making normal server operation impossible, NGINX periodically returned 502 and 504 errors on the web interfaces, and mail was sent only intermittently. With 2 GB of RAM, in all cases there were no problems with the antivirus or the server as a whole: ClamAV scanned mail passing through the server, as its logs showed, and when a virus was sent as an attachment, delivery was blocked. Memory consumption was approximately 1.2-1.7 GB.

Nginx is a small, very fast, quite functional web server and mail proxy server developed by Igor Sysoev (rambler.ru). Due to its very low consumption of system resources, its speed, and its configuration flexibility, Nginx is often used as a frontend to heavier servers, such as Apache, in high-load projects. The classic option is the combination Nginx + Apache + FastCGI. Working in such a scheme, Nginx accepts all requests coming in via HTTP and, depending on the configuration and the request itself, decides whether to process the request itself and return a ready response to the client, or to pass the request for processing to one of the backends (Apache or FastCGI).

As you know, the Apache server processes each request in a separate process (thread), each of which, it must be said, consumes a considerable amount of system resources. If there are 10-20 such processes it's no big deal, but with 100-500 or more the system stops being fun.

Let's try to imagine a typical situation. Suppose 300 HTTP requests arrive at Apache: 150 clients are on fast leased lines and the other 150 on relatively slow Internet channels, even if not modems. What happens? Apache creates a process (thread) for each of the 300 connections and generates the content quickly. The 150 fast clients immediately take the results of their requests; the processes serving them are killed and their resources released. The 150 slow clients, however, receive their results slowly because of their narrow channels, so 150 Apache processes hang around in the system waiting for the clients to pick up the generated content, devouring a lot of system resources. The situation is hypothetical, but I think the point is clear. The Nginx + Apache bundle corrects the situation described above: after reading the entire request from the client, Nginx submits it to Apache, which generates the content and returns the ready response to Nginx as quickly as possible, after which Apache can kill its process with a clear conscience and free the resources it occupied. Nginx, having received the result from Apache, writes it to a buffer, or even to a file on disk, and can feed it to slow clients for as long as needed, while its worker processes consume so few resources that... "it's even funny to talk about it" ©. :) This scheme significantly saves system resources; I repeat, Nginx worker processes consume a tiny amount of resources, and this matters especially for large projects.
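The frontend/backend scheme described above can be sketched as a minimal http-context configuration (the upstream name and address are hypothetical placeholders, not values from the article):

```nginx
http {
    upstream apache_backend {
        server 127.0.0.1:8080;   # the heavyweight Apache backend
    }
    server {
        listen 80;
        location / {
            # Nginx buffers Apache's response and drip-feeds it
            # to slow clients, freeing the backend process quickly.
            proxy_pass http://apache_backend;
        }
    }
}
```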

And this is only a small part of what the Nginx server can do; don't forget about its data-caching capabilities and its support for memcached. Here is a list of the main functionality of the Nginx web server.

Nginx server functionality as an HTTP server
  • Serving static content, index files, directory listings, open file descriptor cache;
  • Accelerated proxying with caching, load distribution and fault tolerance;
  • Accelerated support for FastCGI servers with caching, load distribution and fault tolerance;
  • Modular structure, support for various filters (SSI, XSLT, GZIP, resuming, chunked responses);
  • Support for SSL and TLS SNI extensions;
  • IP-based or Name-based virtual servers;
  • Working with KeepAlive and pipelined connections;
  • Ability to configure any timeouts as well as the number and size of buffers, on a par with the Apache server;
  • Performing various actions depending on the client’s address;
  • Changing URIs using regular expressions;
  • Special error pages for 4xx and 5xx;
  • Restricting access based on client address or password;
  • Setting up log file formats, rotating logs;
  • Limiting the speed of response to the client;
  • Limiting the number of simultaneous connections and requests;
  • Supports PUT, DELETE, MKCOL, COPY and MOVE methods;
  • Changing settings and updating the server without stopping work;
  • Built-in Perl;
Nginx server functionality as a mail proxy server
  • Forwarding to IMAP/POP3 backend using an external HTTP authentication server;
  • Validating an SMTP user against an external HTTP authentication server and forwarding to an internal SMTP server;
  • Supports the following authentication methods:
    • POP3 - USER/PASS, APOP, AUTH LOGIN/PLAIN/CRAM-MD5;
    • IMAP - LOGIN, AUTH LOGIN/PLAIN/CRAM-MD5;
    • SMTP - AUTH LOGIN/PLAIN/CRAM-MD5;
  • SSL support;
  • STARTTLS and STLS support;
Operating systems and platforms supported by the Nginx web server
  • FreeBSD 3 through 8, on the i386 and amd64 platforms;
  • Linux 2.2 through 2.6 on i386; Linux 2.6 on amd64;
  • Solaris 9 on i386 and sun4u; Solaris 10 on i386, amd64 and sun4v;
  • MacOS X on the ppc and i386 platforms;
  • Windows XP and Windows Server 2003 (currently in beta testing);
Nginx server architecture and scalability
  • The main (master) process, several (configured in the configuration file) worker processes running under an unprivileged user;
  • Support for the following connection processing methods:
    • select is a standard method. The corresponding Nginx module is built automatically if no more efficient method is found on a given platform. You can force the build of a given module to be enabled or disabled using the --with-select_module or --without-select_module configuration options.
    • poll is the standard method. The corresponding Nginx module is built automatically if no more efficient method is found on a given platform. You can force the build of a given module to be enabled or disabled using the --with-poll_module or --without-poll_module configuration options.
    • kqueue - effective method, used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X operating systems. When used on dual-processor machines running MacOS X, it can cause a kernel panic.
    • epoll is an efficient method used in Linux 2.6+. Some distributions, such as SuSE 8.2, have patches to support epoll in the 2.4 kernel.
    • rtsig - real time signals, an efficient method used in Linux 2.2.19+. By default there cannot be more than 1024 signals in the queue for the entire system. This is not enough for servers with high load; the queue size needs to be increased using the /proc/sys/kernel/rtsig-max kernel parameter. However, as of Linux 2.6.6-mm2 this option is no longer available; instead each process has a separate signal queue whose size is determined by RLIMIT_SIGPENDING. When the queue overflows, nginx resets it and processes connections using the poll method until the situation returns to normal.
    • /dev/poll is an efficient method, supported on the Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+ operating systems.
    • eventport - event ports, an efficient method used in Solaris 10. Before using it, you need to install a patch to avoid a kernel panic.
  • Using kqueue method capabilities such as EV_CLEAR, EV_DISABLE (to temporarily disable an event), NOTE_LOWAT, EV_EOF, number of available data, error codes;
  • Works with sendfile (FreeBSD 3.1+, Linux 2.2+, Mac OS X 10.5+), sendfile64 (Linux 2.4.21+) and sendfilev (Solaris 8 7/01+);
  • Support for accept filters (FreeBSD 4.1+) and TCP_DEFER_ACCEPT (Linux 2.4+);
  • 10,000 inactive HTTP keep-alive connections consume approximately 2.5M of memory;
  • Minimum number of data copy operations;

NGINX can be used not only as a web server or http-proxy, but also for proxying mail via SMTP, IMAP, POP3 protocols. This will allow you to configure:

  • A single entry point for a scalable email system.
  • Load balancing between all mail servers.

In this article, the installation is performed on a Linux operating system. As the mail service that requests are forwarded to, you can use Postfix, Exim, Dovecot, Exchange, the iRedMail bundle, and others.

Principle of operation

NGINX accepts client requests and performs authentication via a web server (the auth_http handler). Depending on the result of the login and password check, the proxy receives a response with several headers.

In case of success:

Auth-Status: OK
Auth-Server: <IP address of the mail server>
Auth-Port: <port of the mail server>

Thus, the mail server and its port are determined based on authentication. With appropriate knowledge of programming languages, this opens up many possibilities.

In case of failure:

Auth-Status: <error message>

Depending on the authentication result and the returned headers, the client is connected to the required mail server.
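To make this exchange concrete, here is a minimal sketch of an authentication responder in Python. The user database, backend address, and ports are made-up assumptions for illustration only; the article itself uses a PHP script for this role.

```python
# Sketch of the auth_http contract: nginx passes the login, password and protocol,
# and expects Auth-Status / Auth-Server / Auth-Port values in the reply.

# Hypothetical user database (an assumption for this example)
USERS = {"alice": "secret", "bob": "qwerty"}

# Standard ports per mail protocol
PORTS = {"smtp": 25, "pop3": 110, "imap": 143}

def auth_response(user, password, protocol):
    """Build the headers the proxy expects from the authentication service."""
    if USERS.get(user) != password:
        # Failure: nginx reports this message to the client
        return {"Auth-Status": "Invalid login or password"}
    return {
        "Auth-Status": "OK",
        "Auth-Server": "192.168.1.22",  # backend mail server chosen for this user (made up)
        "Auth-Port": str(PORTS[protocol]),
    }

print(auth_response("alice", "secret", "imap"))
```

A real handler would read these values from the request headers nginx sends and emit them as HTTP response headers.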

Preparing the server

Let's make some changes to the server security settings.

SELinux

Disable SELinux if we are using CentOS (or if this security system has been enabled on our Ubuntu):

vi /etc/selinux/config

SELINUX=disabled

To apply the change without rebooting, also run: setenforce 0

Firewall

If we use firewalld (default on CentOS):

firewall-cmd --permanent --add-port=25/tcp --add-port=110/tcp --add-port=143/tcp

firewall-cmd --reload

If we use iptables (default in Ubuntu):

iptables -A INPUT -p tcp --dport 25 -j ACCEPT

iptables -A INPUT -p tcp --dport 110 -j ACCEPT

iptables -A INPUT -p tcp --dport 143 -j ACCEPT

apt-get install iptables-persistent

iptables-save > /etc/iptables/rules.v4

* in this example we allowed SMTP (25), POP3 (110), IMAP (143).

Installing NGINX

The NGINX installation differs slightly depending on the operating system.

On CentOS:

yum install nginx

On Ubuntu:

apt install nginx

We allow autostart of the service and start it:

systemctl enable nginx

systemctl start nginx

If NGINX is already installed on the system, check which modules it was built with:

nginx -V

We will get the list of options the web server was built with; among them we should see --with-mail. If the required module is not there, nginx needs to be updated or rebuilt.

Setting up NGINX

Open the nginx configuration file and add the mail section:

vi /etc/nginx/nginx.conf

mail {
    server_name mail.domain.local;
    auth_http localhost:80/auth.php;
    proxy_pass_error_message on;

    server {
        listen 25;
        protocol smtp;
        smtp_auth login plain cram-md5;
    }

    server {
        listen 110;
        protocol pop3;
        pop3_auth plain;
    }

    server {
        listen 143;
        protocol imap;
    }
}

* Where:

  • server_name - the name of the mail server, shown in the SMTP greeting.
  • auth_http - the web server and URL used for authentication requests.
  • proxy_pass_error_message - allows or denies showing the authentication server's error message to the client.
  • listen - the port on which requests are accepted.
  • protocol - the application protocol served on that port.
  • smtp_auth - permitted authentication methods for SMTP.
  • pop3_auth - permitted authentication methods for POP3.

In the http section, add a server block:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    ...

    location ~ \.php$ {
        set $root_path /usr/share/nginx/html;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $root_path$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_param DOCUMENT_ROOT $root_path;
    }
    ...
}

Restart the nginx server:

systemctl restart nginx

Installing and configuring PHP

To perform authentication using PHP, you need to install the following packages on your system.

If CentOS:

yum install php php-fpm

If Ubuntu:

apt-get install php php-fpm

Launch PHP-FPM:

systemctl enable php-fpm

systemctl start php-fpm

Authentication

Login and password verification is performed by a script, the path to which is specified by the auth_http option. In our example, this is a PHP script.

An example of an official template for a login and password verification script:

vi /usr/share/nginx/html/auth.php

* this script accepts any login and password and redirects requests to servers 192.168.1.22 and 192.168.1.33. To set the authentication algorithm, edit lines 61 - 64. Lines 73 - 77 are responsible for returning the servers to which the redirection is made - in this example, if the login begins with the characters “a”, “c”, “f”, “g”, then the redirection will be to the mailhost01 server, otherwise, to mailhost02. The mapping of server names to IP addresses can be set on lines 31, 32, otherwise the call will be made using the domain name.
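The backend-selection rule described above can be sketched roughly as follows. This is only an illustration in Python of the logic the PHP template implements; the server names and addresses are taken from the description, everything else is an assumption.

```python
# Mapping of server names to IP addresses (corresponds to lines 31-32 of the script)
HOSTS = {"mailhost01": "192.168.1.22", "mailhost02": "192.168.1.33"}

def pick_backend(login):
    """Logins starting with a, c, f or g go to mailhost01, the rest to mailhost02."""
    name = "mailhost01" if login[:1].lower() in ("a", "c", "f", "g") else "mailhost02"
    return HOSTS[name]

print(pick_backend("anna"))   # mailhost01
print(pick_backend("boris"))  # mailhost02
```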

Setting up a mail server

Data is exchanged between the NGINX proxy and the mail server in plain text. Therefore, an exception must be added to the mail server allowing authentication via the PLAIN mechanism. For example, to configure Dovecot, do the following:

vi /etc/dovecot/conf.d/10-auth.conf

Add the lines:

remote 192.168.1.11 {
    disable_plaintext_auth = no
}

* in this example, we allowed PLAIN authentication requests from server 192.168.1.11.
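For reference, a PLAIN credential as sent on the wire is just base64 of "\0login\0password"; a quick Python sketch (the login and password here are arbitrary examples):

```python
# Build a SASL PLAIN token: base64("\0login\0password")
import base64

def plain_token(login, password):
    return base64.b64encode(f"\0{login}\0{password}".encode()).decode()

print(plain_token("alice", "secret"))
```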

We also check the ssl setting in the Dovecot configuration:

* if ssl is set to required, the check will not work: the server would allow plaintext requests on the one hand, but require SSL encryption on the other.

Restart Dovecot service:

systemctl restart dovecot

Client setup

You can now proceed to checking our proxy settings. To do this, in the mail client settings, specify the address or name of the nginx server for IMAP/POP3/SMTP, for example:

* in this example, the mail client is configured to connect to server 192.168.1.11 via open ports 143 (IMAP) and 25 (SMTP).

Encryption

Now let's set up an SSL connection. Nginx must be built with the mail_ssl_module module; check with the command:

nginx -V

If the required option (--with-mail_ssl_module) is missing, we rebuild nginx.

Then we edit our configuration file:

vi /etc/nginx/nginx.conf

mail {
    server_name mail.domain.local;
    auth_http localhost/auth.php;

    proxy_pass_error_message on;

    ssl on;
    ssl_certificate /etc/ssl/nginx/public.crt;
    ssl_certificate_key /etc/ssl/nginx/private.key;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen 110;
        protocol pop3;
        pop3_auth plain apop cram-md5;
    }

    server {
        listen 143;
        protocol imap;
    }
}

Possible problem: after configuration, the proxy does not accept connections.

Reason: the SELinux security system is triggered.

Solution: disable or configure SELinux.

Nginx is rapidly gaining popularity, turning from just a static delivery accelerator for Apache into a fully functional and developed web server, which is increasingly used in isolation. In this article we will talk about interesting and non-standard scenarios for using nginx that will allow you to get the most out of your web server.

Mail proxy

Let's start with the most obvious - with nginx's ability to act as a mail proxy. This function is present in nginx initially, but for some reason it is used in production extremely rarely; some people are not even aware of its existence. Be that as it may, nginx supports proxying POP3, IMAP and SMTP protocols with various authentication methods, including SSL and StartTLS, and does it very quickly.

Why is this necessary? There are at least two uses for this functionality. First: use nginx as a shield against annoying spammers trying to send junk mail through our SMTP server. Typically, spammers do not create many problems, since they are quickly rejected at the authentication stage; however, when there are really a lot of them, nginx will help save processor resources. Second: use nginx to redirect users to one of several POP3/IMAP mail servers. Of course, another mail proxy could handle this, but why add more servers if nginx is already installed on the frontend to serve static content over HTTP?

The mail proxy server in nginx is somewhat non-standard. It uses an additional layer of authentication implemented over HTTP, and only if the user passes this barrier is he allowed to proceed further. This functionality is provided by a page/script to which nginx sends the user's data, and which returns a response in the form of a standard OK or a reason for refusal (such as "Invalid login or password"). The script is called with the following input:

Authentication script input:
HTTP_AUTH_USER: user
HTTP_AUTH_PASS: password
HTTP_AUTH_PROTOCOL: mail protocol (IMAP, POP3 or SMTP)

And it returns the following:

Authentication script output:
HTTP_AUTH_STATUS: OK or the failure reason
HTTP_AUTH_SERVER: the real mail server to redirect to
HTTP_AUTH_PORT: the server port

A remarkable feature of this approach is that it can be used not at all for authentication itself, but to scatter users across different internal servers, depending on the user name, data on the current load on mail servers, or even by organizing simple load balancing using round-robin . However, if you just need to transfer users to an internal mail server, you can use a stub implemented by nginx itself instead of a real script. For example, the simplest SMTP and IMAP proxy in the nginx config will look like this:

# vi /etc/nginx/nginx.conf
mail {
    # Authentication script address
    auth_http localhost:8080/auth;
    # Disable the XCLIENT command; some mail servers do not understand it
    xclient off;
    # IMAP server
    server {
        listen 143;
        protocol imap;
        proxy on;
    }
    # SMTP server
    server {
        listen 25;
        protocol smtp;
        proxy on;
    }
}

# vi /etc/nginx/nginx.conf
http {
    # Map the protocol sent in the HTTP_AUTH_PROTOCOL header to the desired mail server port
    map $http_auth_protocol $mailport {
        default 25;
        smtp    25;
        imap    143;
    }
    # Implementation of the authentication "script": always returns OK and transfers the user
    # to the internal mail server, setting the desired port using the mapping above
    server {
        listen 8080;
        location /auth {
            add_header "Auth-Status" "OK";
            add_header "Auth-Server" "192.168.0.1";
            add_header "Auth-Port" $mailport;
            return 200;
        }
    }
}

That's all. This configuration allows you to transparently redirect users to the internal mail server, without the overhead of a script that is unnecessary in this case. Using a script, this configuration can be significantly extended: configure load balancing, check users against an LDAP database, and perform other operations. Writing such a script is beyond the scope of this article, but it is very easy to implement even with only a passing knowledge of PHP or Python.
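As a sketch of what such a script might do, here is the round-robin idea in Python. The backend addresses are made up; a real script would answer over HTTP, returning these values as Auth-* headers.

```python
# Round-robin selection of an internal mail server for each authenticated user.
from itertools import cycle

BACKENDS = cycle(["192.168.0.1", "192.168.0.2", "192.168.0.3"])

def auth_headers(protocol):
    """Accept everyone and hand out backends in round-robin order."""
    port = {"smtp": 25, "imap": 143, "pop3": 110}[protocol]
    return {"Auth-Status": "OK",
            "Auth-Server": next(BACKENDS),
            "Auth-Port": str(port)}

print([auth_headers("imap")["Auth-Server"] for _ in range(4)])
# -> ['192.168.0.1', '192.168.0.2', '192.168.0.3', '192.168.0.1']
```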

Video streaming

It's easy to set up regular video hosting based on nginx. All you need to do is upload the transcoded video to a directory accessible to the server, register it in the config and configure the Flash or HTML5 player so that it takes the video from this directory. However, if you need to set up continuous video broadcasting from some external source or a webcam, this scheme will not work, and you will have to look towards special streaming protocols.

There are several protocols that solve this problem; the most effective and best supported of them is RTMP. The only trouble is that almost all RTMP server implementations have problems. The official Adobe Flash Media Server is paid. Red5 and Wowza are written in Java and therefore do not provide the required performance. Another implementation, Erlyvideo, is written in Erlang, which is good for a cluster setup but not as efficient for a single server.

I suggest a different approach: use the RTMP module for nginx. It has excellent performance and also allows one server to serve both the site's web interface and the video stream. The only problem is that this module is unofficial, so you will have to build nginx with its support yourself. Fortunately, the build is done in the standard way:

$ sudo apt-get remove nginx
$ cd /tmp
$ wget http://bit.ly/VyK0lU -O nginx-rtmp.zip
$ unzip nginx-rtmp.zip
$ wget http://nginx.org/download/nginx-1.2.6.tar.gz
$ tar -xzf nginx-1.2.6.tar.gz
$ cd nginx-1.2.6
$ ./configure --add-module=/tmp/nginx-rtmp-module-master
$ make
$ sudo make install

Now the module needs to be configured. This is done, as usual, through the nginx config:

rtmp {
    # Activate the broadcast server on port 1935 at the address site/rtmp
    server {
        listen 1935;
        application rtmp {
            live on;
        }
    }
}

The RTMP module cannot work in a multi-threaded configuration, so the number of nginx worker processes will have to be reduced to one (later I will tell you how to get around this problem):

worker_processes 1;

Now you can save the file and force nginx to re-read the configuration. The nginx setup is complete, but we don’t have the video stream itself yet, so we need to get it somewhere. For example, let this be the video.avi file from the current directory. To turn it into a stream and wrap it in our RTMP broadcaster, we’ll use good old FFmpeg:

# ffmpeg -re -i ~/video.avi -c copy -f flv rtmp://localhost/rtmp/stream

If the video file is not in H264 format, it must be re-encoded. This can be done on the fly using the same FFmpeg:

# ffmpeg -re -i ~/video.avi -c:v libx264 -c:a libfaac -ar 44100 -ac 2 -f flv rtmp://localhost/rtmp/stream

The stream can also be captured directly from a webcam:

# ffmpeg -f video4linux2 -i /dev/video0 -c:v libx264 -an -f flv rtmp://localhost/rtmp/stream

To view the stream on the client side, you can use any player that supports RTMP, for example mplayer:

$ mplayer rtmp://example.com/rtmp/stream

Or embed the player directly into a web page, which is served by the same nginx (example from the official documentation):

The simplest RTMP web player

jwplayer("container").setup({
    modes: [{
        type: "flash",
        src: "/jwplayer/player.swf",
        config: {
            bufferlength: 1,
            file: "stream",
            streamer: "rtmp://localhost/rtmp",
            provider: "rtmp",
        }
    }]
});

There are only two important lines here: file: "stream", which names the RTMP stream, and streamer: "rtmp://localhost/rtmp", which specifies the address of the RTMP streamer. For most tasks, such settings are quite sufficient. You can send several different streams to one address, and nginx will efficiently multiplex them between clients. But this is not all the RTMP module is capable of. With its help, for example, you can relay a video stream from another server. FFmpeg is not needed for this at all; just add the following lines to the config:

# vi /etc/nginx/nginx.conf
application rtmp {
    live on;
    pull rtmp://rtmp.example.com;
}

If you need to create multiple streams of different quality, you can call the FFmpeg transcoder directly from nginx:

# vi /etc/nginx/nginx.conf
application rtmp {
    live on;
    exec ffmpeg -i rtmp://localhost/rtmp/$name -c:v flv -c:a copy -s 320x240 -f flv rtmp://localhost/rtmp-320x240/$name;
}
application rtmp-320x240 {
    live on;
}

With this configuration, we get two broadcasters at once: one available at rtmp://site/rtmp, and a second, broadcasting in 320 x 240 quality, at rtmp://site/rtmp-320x240. Next, you can add a flash player and quality selection buttons to the site, which will give the player one or the other broadcaster address.

And finally, an example of broadcasting music to the network:

while true; do
    ffmpeg -re -i "$(find /var/music -type f -name '*.mp3' | sort -R | head -n 1)" -vn -c:a libfaac -ar 44100 -ac 2 -f flv rtmp://localhost/rtmp/stream
done

Git proxy

The Git version control system is capable of providing access to repositories not only via the Git and SSH protocols, but also via HTTP. Once upon a time, the implementation of HTTP access was primitive and unable to provide full-fledged work with the repository. With version 1.6.6, the situation has changed, and today this protocol can be used to, for example, bypass firewall restrictions on both sides of the connection or to create your own Git hosting with a web interface.

Unfortunately, the official documentation only covers organizing access to Git with the Apache web server, but since the implementation itself is an external application with a standard CGI interface, it can be attached to almost any other server, including lighttpd and, of course, nginx. This requires nothing except the server itself, an installed Git, and the small FastCGI server fcgiwrap, which is needed because nginx cannot work with CGI directly but can call scripts over the FastCGI protocol.

The whole scheme of work will look like this. The fcgiwrap server will hang in the background and wait for a request to execute the CGI application. Nginx, in turn, will be configured to request execution of the git-http-backend CGI binary via the FastCGI interface every time the address we specify is accessed. Upon receiving the request, fcgiwrap executes git-http-backend with the specified CGI arguments passed by the GIT client and returns the result.

To implement such a scheme, first install fcgiwrap:

$ sudo apt-get install fcgiwrap

There is no need to configure it; all parameters are transmitted via the FastCGI protocol. It will also launch automatically. Therefore, all that remains is to configure nginx. To do this, create a file /etc/nginx/sites-enabled/git (if there is no such directory, you can write to the main config) and write the following into it:

# vi /etc/nginx/sites-enabled/git
server {
    # We listen on port 8080
    listen 8080;
    # Our server address (don't forget to add a DNS record)
    server_name git.example.ru;
    # Logs
    access_log /var/log/nginx/git-http-backend.access.log;
    error_log /var/log/nginx/git-http-backend.error.log;
    # Primary address for anonymous access
    location / {
        # When a push is attempted, send the user to the private address
        if ($arg_service ~* "git-receive-pack") {
            rewrite ^ /private$uri last;
        }
        include /etc/nginx/fastcgi_params;
        # Address of our git-http-backend
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        # Git repository address
        fastcgi_param GIT_PROJECT_ROOT /srv/git;
        # File address
        fastcgi_param PATH_INFO $uri;
        # fcgiwrap server address
        fastcgi_pass 127.0.0.1:9001;
    }
    # Write access address
    location ~ /private(/.*)$ {
        # User permissions
        auth_basic "git anonymous read-only, authenticated write";
        # HTTP authentication based on htpasswd
        auth_basic_user_file /etc/nginx/htpasswd;
        # FastCGI settings
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        fastcgi_param GIT_PROJECT_ROOT /srv/git;
        fastcgi_param PATH_INFO $1;
        fastcgi_pass 127.0.0.1:9001;
    }
}

This config assumes three important things:

  • The repository address will be /srv/git, so we set the appropriate access rights:
    $ sudo chown -R www-data:www-data /srv/git
  • The repository itself must be open for reading by anonymous users and allow uploads via HTTP:
    $ cd /srv/git
    $ git config core.sharedrepository true
    $ git config http.receivepack true
  • Authentication is performed using an htpasswd file; create it and add users to it:
    $ sudo apt-get install apache2-utils
    $ htpasswd -c /etc/nginx/htpasswd user1
    $ htpasswd /etc/nginx/htpasswd user2
    ...
  • That's all, restart nginx:
    $ sudo systemctl restart nginx

Microcaching

Let's imagine a dynamic, frequently updated website that suddenly starts receiving very heavy load (say, it got linked from one of the largest news sites) and can no longer keep up with serving content. Proper optimization and implementation of a correct caching scheme would take a long time, but the problem needs to be solved now. What can we do?

There are several ways to get out of this situation with minimal losses, but the most interesting idea was proposed by Fenn Bailey (fennb.com). The idea is to simply put nginx in front of the server and force it to cache all transmitted content, but for just one second. The twist is that hundreds and thousands of visitors per second will, in effect, generate only one request to the backend, with the rest receiving a cached page. At the same time, hardly anyone will notice the difference, because even on a dynamic site one second usually means nothing.

The config implementing this idea does not look complicated:

# vi /etc/nginx/sites-enabled/cache-proxy
# Cache configuration
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:5m max_size=1000m;
server {
    listen 80;
    server_name example.com;
    # Cached address
    location / {
        # Cache is enabled by default
        set $no_cache "";
        # Disable the cache for all methods except GET and HEAD
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }
        # If the client uploads content to the site (no_cache = 1), make sure the data
        # given to him is not cached for two seconds so he can see the result of the upload
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }
        # Enable/disable the cache depending on the state of the no_cache variable
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        # Proxy requests to the real server
        proxy_pass http://appserver.example.ru;
        proxy_cache microcache;
        proxy_cache_key $scheme$host$request_method$request_uri;
        proxy_cache_valid 200 1s;
        # Protection from the thundering herd problem
        proxy_cache_use_stale updating;
        # Add standard headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Do not cache files larger than 1 MB
        proxy_max_temp_file_size 1M;
    }
}

A special place in this config is occupied by the line "proxy_cache_use_stale updating;", without which we would get periodic bursts of load on the backend server due to requests arriving while the cache is being updated. Otherwise, everything is standard and should be clear without further explanation.

Bringing the proxy closer to the target audience

Despite the widespread global increase in Internet speeds, the physical distance of a server from its target audience still plays a role. This means that if a Russian site runs on a server located somewhere in America, access to it will a priori be slower than from a Russian server with the same channel width (all other factors being equal). On the other hand, hosting servers abroad is often cheaper, including in terms of maintenance. So, to get the benefit of both, you will have to use some tricks.

One possible option: place the main production server in the West, and deploy a frontend in Russia that is not too resource-hungry and serves static content. This lets you gain speed without serious costs. The nginx config for such a frontend is a simple and familiar proxy setup:

# vi /etc/nginx/sites-enabled/proxy
# Store the cache for 30 days in 100 GB of storage
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:32m inactive=30d max_size=100g;
server {
    listen 80;
    server_name example.com;
    # Actually, our proxy
    location ~* \.(jpg|jpeg|gif|png|ico|css|midi|wav|bmp|js|swf|flv|avi|djvu|mp3)$ {
        # Backend address
        proxy_pass http://back.example.com:80;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffer_size 16k;
        proxy_buffers 32 16k;
        proxy_cache static;
        proxy_cache_valid 30d;
        proxy_ignore_headers "Cache-Control" "Expires";
        proxy_cache_key "$uri$is_args$args";
        proxy_cache_lock on;
    }
}

Conclusions

Today, with the help of nginx, you can solve many different problems, many of which are not related to the web server or the HTTP protocol at all. The mail proxy, streaming server, and Git interface are just some of these tasks.
