Moving WordPress from One Server to Another

As a network administrator, there comes a time when you need to reconfigure your server network(s) to some degree. We do this to be more efficient and to solve important issues, and in the server world that means moving things such as hardware and services around. I had a couple of projects where I needed to move services from one server to another.

Part of the push to consolidate services onto a single machine was the need to demonstrate to my site’s visitors that server security matters. Besides, when you host web sites and email accounts for others, those users want to know that where they are going is actually secure.

Cyber security has been in the press a whole lot more lately, so people are more aware of it, mostly through news reports and sometimes through their own curiosity. Today Google (Chrome) and Mozilla (the makers of Firefox) are pushing everyone toward the secure version of the HTTP protocol known as HTTPS.

The reason I felt it was more secure to consolidate services onto a single machine is that two machines means two pieces of equipment, two operating system installs, and other duplicate software that all need to be maintained.  By maintained I mean the software has to be updated and secured.  Network management is also made more complicated by having two.  For instance, when I update the configuration for fail2ban on one machine I need to duplicate that for the other, and I may need to update the router to do things specific to both machines.

Anyway, continuing on, earlier this month that’s what I was doing on a different network: setting up the server hardware, configuring software, and securing it.  I’d put together a server at an office I do consulting for and found that it was more convenient to have both the email services and web services on the same computer.

In the past I’d been unsure about consolidating. My thinking was that with two machines, if one went down, part of the network would keep working while I fixed the other; the old way let me work on the email server while the web server kept running, whereas combining the services onto one machine means everything goes down at once. Another part of the uneasiness was the feeling that a security flaw in one service could lead to compromising both.

I decided that I could do it properly.  I also decided that it would be a good idea to put together a backup email/web server that could be activated on demand. That is in the planning stages so it is something I’ll discuss at another time.

Some time ago I thought to myself that I would like to split some of the websites off the one server and host them on another. In case you don’t know, a server running web server services can actually handle more than one site.  It just takes somewhat more complex configuration, but once you figure the process out you can host any number of websites on a single machine.  In my case I have that setup done, but I wanted to be able to have more than one web/email server hosting different sites.  There are some more complex issues involved in configuring the router and related pieces to accommodate that, and I have yet to investigate how to do it. I gather it has something to do with reverse DNS look-ups. That will be yet another post.
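As a sketch of how one Apache instance serves multiple sites, each site gets its own VirtualHost block keyed on its ServerName (the domain names and paths below are placeholders, not my actual sites):

```apache
# /etc/apache2/sites-available/domain.com.conf (hypothetical example)
<VirtualHost *:80>
    ServerName www.domain.com
    ServerAlias domain.com
    DocumentRoot /var/www/html/domain.com
</VirtualHost>

# /etc/apache2/sites-available/other-site.com.conf
<VirtualHost *:80>
    ServerName www.other-site.com
    ServerAlias other-site.com
    DocumentRoot /var/www/html/other-site.com
</VirtualHost>
```

Each file is enabled with a2ensite, and Apache picks the right site by matching the browser’s Host header against ServerName/ServerAlias.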

What I’m writing about today focuses on this week’s work where I consolidated these services onto one physical server machine. The big question was did I want to move my email services to the web server or move my web services to the email server? I had to choose which direction I wanted to go.

NOTE: In reading this don’t get confused by the term “server” and the term “services”.  A server is the whole of the unit and that unit runs “services” which are pieces of software satisfying a specific purpose.

In my lead-up to this I had to make that decision. I decided that it would be easier to move the web services to the email server, since the web server’s data is less complex. The email server depends more heavily on the file system, which means dealing with more (and more complex) file permissions, in addition to moving the email server’s own MySQL database data.

Starting with the web server, I would export the contents of the databases to .sql files using an open source tool called “mysqldump”. I could then copy those dumps to the email server, create a corresponding database there (within MySQL), and import each .sql file. Note that the importing is done with the mysql command-line client; mysqldump only handles the export side. In addition, I had to copy the existing WordPress installs for each domain over and adjust file permissions appropriately. This changeover also required that some new software be installed. On a positive note, since postfix email servers want to have web access, the Apache web services were already installed on both machines, and for the most part correctly configured.
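Sketched as commands, with domain_com standing in for a real site's database (all names, hosts, and paths here are placeholders):

```shell
# On the web server: export one site's database to a .sql file.
mysqldump -u root -p domain_com > domain_com.sql

# Copy the dump over to the email server.
scp domain_com.sql admin@mailserver:/tmp/

# On the email server: create an empty database, then import the
# dump with the mysql client (mysqldump exports; mysql imports).
mysql -u root -p -e "CREATE DATABASE domain_com;"
mysql -u root -p domain_com < /tmp/domain_com.sql
```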

My first attempt was to export all the databases from the web server at once, copy the resulting .sql file over to the email server, and import them all at once. That seemed to work; however, it created one very major issue. The web server had a mostly empty database called “mail”, and the email server had a live one, so when I imported that .sql file I overwrote the email server’s mail database with the mostly empty one from the web server, disabling my ability to send or receive email for any of the domains I hosted. That was a big mistake that I learned from.

I was able to recover from the mistake, because I was smart enough to (prior to doing anything) make a copy of the /var/lib/mysql folders on both machines.

With about 15 minutes to go before it was time to leave for the day, I realized I’d made this mistake and backed out of it by simply renaming the current MySQL folder and copying the backup MySQL folder back into place. The only issue is that the copy was done as root, so it changed the ownership to root:root. (Note: I’ve subsequently discovered that the cp (copy) command will preserve permissions and ownership if I pass it an “-a” parameter, e.g., “cp -a <source> <target>”.) I didn’t notice that at first, so when I attempted to start the service manually at the command prompt I received messages saying that the service didn’t start. I examined the permissions, discovered the problem, changed the ownership from root:root to mysql:mysql, and tried again.
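The backup and the recovery can be sketched like this (assuming the standard Debian/Ubuntu data directory and service name):

```shell
# Before touching anything: stop MySQL and keep a safety copy of
# its data directory.  The -a flag preserves permissions/ownership.
systemctl stop mysql
cp -a /var/lib/mysql /var/lib/mysql.bak

# To recover: move the broken data aside, copy the backup into
# place, and make sure the mysql user owns its files again.
mv /var/lib/mysql /var/lib/mysql.broken
cp -a /var/lib/mysql.bak /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
```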

This seemed to work so I went home with the intent that I would pick up first thing in the morning. While at home I used SSH (secure shell) to get into the servers using my tablet and I exported the individual databases instead of exporting the whole thing at once. I then copied the corresponding .sql files to the email server and called it a night.

The next day I came in and planned out my steps with consideration of the issues I’d experienced the day before. I chose to do one website at a time. I’d already copied all the WordPress files onto the email machine, so technically I just needed to ensure that the database name, username, and password were correct in each /var/www/html/domain.com/wp-config.php file (where domain.com represents the domain name that corresponds to the website). When I did this I realized that I’d made the future maintenance process difficult because I’d used some generic names (those suggested by the website guide I used to initially set WordPress up). I chose to correct that issue at this time even though it meant more work and more potential for mess-ups.
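A minimal, self-contained sketch of that wp-config.php edit (using a temp file in place of the real /var/www/html/domain.com/wp-config.php, and placeholder names throughout):

```shell
# Stand-in for a real wp-config.php with the old generic settings.
conf=$(mktemp)
cat > "$conf" <<'EOF'
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpressuser');
define('DB_PASSWORD', 'old-generic-password');
EOF

# Point the install at the new site-specific database, user, and password.
sed -i \
  -e "s/define('DB_NAME'.*/define('DB_NAME', 'domain_com');/" \
  -e "s/define('DB_USER'.*/define('DB_USER', 'domain_com');/" \
  -e "s/define('DB_PASSWORD'.*/define('DB_PASSWORD', 'new-strong-password');/" \
  "$conf"

# Confirm the edits took.
grep "DB_" "$conf"
```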

I renamed each of the .sql files to correspond to the websites that I was transferring. When I created the new mysql database for each website on the email machine I chose to use the website name for each database. That helped keep the mistakes and complexity to a minimum.

I also ensured that the username reflected the website name. Since my web server was going to host multiple websites I needed separate databases for each website and I chose separate users for each website too. This “user” accesses the mysql database for the website and that user can add or update records in the database tables for things like writing blog posts or creating web pages.

Anyway, I got it all squared away and tested; however, other issues cropped up. One was that I needed to add each user, grant the user rights to the appropriate database, and set their password. Then I had to test their access by logging in to MySQL as that user. This went remarkably well.
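In MySQL, those steps look roughly like this (the database name, username, and password are placeholders, not my real ones):

```sql
-- Run as the MySQL root user on the email machine.
CREATE DATABASE domain_com;
CREATE USER 'domain_com'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON domain_com.* TO 'domain_com'@'localhost';
FLUSH PRIVILEGES;
```

Afterwards, running mysql -u domain_com -p domain_com confirms the new user can actually log in to its own database.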

After doing the databases, users, and passwords, editing the WordPress config files, and importing each .sql data file into the corresponding website’s database, all appeared well. Then I realized there was no simple way to test this. I decided the simplest way would be to just change the rule in pfSense to point to the email machine and then test. In other words, I would adjust the pfSense firewall rule for the ports associated with the web server, and switch it back to the original web server machine any time something failed. That way there would only be short periods when the web server appeared down or broken.

At the same time, since I’d been working with letsencrypt certificates, I had to ensure that I had the letsencrypt certbot program and its dependencies installed. That took some time (finding the correct PPA, adding it, updating, and installing the program itself). There were two different PPAs: one for certbot and the other for letsencrypt. I suspect one of the PPAs is maintained by a third party. I got it all installed, but I still have to test this with a future renewal of the certificates.
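On Ubuntu that install went roughly as follows (the PPA and package names are the ones certbot’s documentation pointed to at the time; they may differ on other releases):

```shell
# Add the certbot PPA, refresh the package lists, and install.
add-apt-repository ppa:certbot/certbot
apt-get update
apt-get install certbot python-certbot-apache
```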

Part of the reason (the main part, really) I started this switchover is that I wanted to use the certificates from the web server with the email server. With the web server on one machine and the email on another, using those certs after a renewal was extra work (manually copying the cert files to the second server), plus there were other issues having to do with ports and not being able to log in as root using SSHFS. So the natural solution was to move the web server onto the email server machine and then just edit the configuration for postfix and dovecot to point to the location where the web server certificates were stored. On the setup for the customer that I consult for, this is how I did it from the start; in my own case I was modifying an existing install, which made it much harder. To accomplish this I had to copy the existing letsencrypt certificates from the web server machine to the email server machine and ensure that all the permissions were correct.
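Pointing both mail daemons at the letsencrypt certificates comes down to a few configuration lines (www.domain.com stands in for the real certificate directory name):

```
# /etc/postfix/main.cf
smtpd_tls_cert_file = /etc/letsencrypt/live/www.domain.com/fullchain.pem
smtpd_tls_key_file  = /etc/letsencrypt/live/www.domain.com/privkey.pem

# /etc/dovecot/conf.d/10-ssl.conf
ssl_cert = </etc/letsencrypt/live/www.domain.com/fullchain.pem
ssl_key  = </etc/letsencrypt/live/www.domain.com/privkey.pem
```

After editing, both services need a reload to pick up the new paths.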

I accomplished this as well. Now the web server and email server use the same letsencrypt certificates, so when I update the cert for the web server the email server benefits. I also have a prosody server and I want to get that working using those same certs, but that is a whole new ball of wax, as prosody doesn’t run as root and so lacks the permissions to read the certs.
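Prosody ships a helper for exactly this situation (in version 0.10 and later, if I recall correctly): run as root, it copies the matching certificates into a directory prosody itself can read. The hostname below is a placeholder:

```shell
# Import the letsencrypt certs for the host into /etc/prosody/certs
# with ownership/permissions that the prosody user can use.
prosodyctl --root cert import www.domain.com /etc/letsencrypt/live
```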

Anyway, to test this out I had to modify my pfSense-based router’s configuration by changing and adding aliases used in the firewall’s NAT rules, so that I could forward the appropriate ports to a specific machine. In the past the email ports went to the email machine and the web ports went to the web server machine. I also noted that I’d created individual rules for each web server port, whereas the email machine’s ports were grouped in an alias. So I deleted the per-port web server rules and created an alias referencing all the necessary ports. Then all I had to do was adjust the NAT rules, collapsing multiple rules into a single rule with an alias. At this point it was time for initial tests. I hoped that loading the web pages would work. Nope.

It took some time, but I realized that the virtual host config files I’d been working on in Apache were still pointing to the old web server machine’s IP. It took quite a number of tests to narrow down what was happening. I changed those config files and tried again. That finally worked.

About 2 weeks ago, when I discovered that letsencrypt would no longer allow domain names without a sub-domain (the renewal would fail verification), I redid the certificates and decided that I would try to redirect requests where users left off the www. part of the URL, adding it for them auto-magically. Otherwise the web sites would indicate that the site was insecure. So domain.com would translate to https://domain.com, and that would trigger the unsafe-site prompt in Firefox. What I needed to do was redirect all traffic to https://www.domain.com regardless of what was entered in the URL.

This made me look at various things in the registrar’s DNS records, and I thought the solution would be easy if I just created a CNAME record. Nope, that didn’t do it.

I tried editing the .htaccess file, but to no avail. And, by the way, one thing to note is that the Firefox browser’s cache caused no end of grief. Don’t be convinced that a redirect is (or isn’t) working, because Firefox caches redirects. You need to flush the cache and test over and over.

I managed to get the .htaccess correct, and now all sites seem to redirect properly to https://www.domain.com.
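For reference, the kind of .htaccess rules that handle this look something like the following (a sketch with a placeholder domain; the exact conditions depend on how SSL is terminated):

```apache
RewriteEngine On

# Anything that isn't already https://www.… gets a permanent
# redirect to https://www.domain.com plus the original path.
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.domain.com/$1 [L,R=301]
```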

One more issue: though the 3 main sites that I manage were working properly, the 4th one seemed to redirect visitors to the first website (it chose the web site to redirect to alphabetically, amazingly). Well, I found that I’d failed to a2ensite it, and did so. That turned out to be only one of the problems, since it still failed and redirected to the first website. I then looked at the sites-available config files and found that I’d failed to correct the IP address in that web site’s configs. Once that was completed I was able to reload apache2 and test. Everything worked.
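Those two fixes can be sketched as follows (the site name and the stale IP address are placeholders):

```shell
# Find any vhost configs still pointing at the old machine's IP.
grep -rn "192.168.1.10" /etc/apache2/sites-available/

# After correcting the IP in that site's config, enable the site
# and reload Apache.
a2ensite domain.com.conf
systemctl reload apache2
```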

Last night before heading to bed I decided to do a thorough test of editing. After all, I’d just made a major change and needed to make sure that I could add and update the pages of the sites. Furthermore, some pages in WordPress indicated that they were secure while others didn’t. I’d set up my sites before implementing HTTPS, so there were references to http://www.domain.com that needed to be changed to HTTPS. I did this, saved each page/post, then reloaded each page in the browser and found that the issues were resolved. Good.

During my testing of site use I noticed one disturbing thing. I’d created some blog posts that pointed to external sites discussing security issues. Over a couple of years I’d read them and they seemed OK, but last night, while confirming that all the pages were loading as secure, three blog posts with links to an external site prompted readers for a username and password. The prompt came from the site I’d linked in my posts, and it was being shown to my users. It happened because I’d used the oEmbed feature in WordPress to embed part of their site into my blog posts.

This was disturbing because the behavior wasn’t there before and it was asking people for usernames and passwords. Most people wouldn’t know what it was, and I suspect they’d just give the information. It seemed to me to be potentially a phishing scam, or the site was infected and its owner didn’t know it, or the whole thing was a spoof from the start. The easy solution was just to delete those blog posts, so I did. Yay, no more prompts. We all need to be aware of these potential privacy and security violations coming from seemingly respectable sites.