M2M2: Installing certbot

While setting up the server for M2 I found that it won’t run on a bare, insecure IP address; it requires a FQDN with a valid SSL certificate. So I registered a .dev domain, because, well, why not.

Just for full disclosure’s sake, here’s the path for this;

  • Registered domain
  • Updated DNS to point to the newly reserved IP address for the M2 server
  • Installed certbot;
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx
  • Generate Diffie-Hellman parameters;
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
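Generating 2048-bit parameters can take a few minutes. As a quick sanity check of the openssl invocation and the resulting file, you can run the same command against a throwaway path with a small key size (512 bits here purely so it finishes fast; the real file above should stay at 2048) and ask openssl to verify it:

```shell
# Throwaway DH parameters, small size ONLY for speed; real setup uses 2048
openssl dhparam -out /tmp/dhparam-test.pem 512 2>/dev/null

# -check validates the parameters, -noout suppresses re-printing them
openssl dhparam -in /tmp/dhparam-test.pem -check -noout
```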
  • Set the correct hostnames and whatnot to allow certbot to function;
sudo nano /etc/nginx/sites-available/example.com
server {
    listen 80;
    listen [::]:80;

    root /var/www/example.com/public_html;

    index index.html;

    server_name example.com www.example.com;

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    location / {
        try_files $uri $uri/ =404;
    }
}
  • Create a catch-all symlink for enabled sites in Nginx and restart it;
sudo rm -rf /etc/nginx/sites-enabled/*
sudo ln -s /etc/nginx/sites-available/* /etc/nginx/sites-enabled
sudo service nginx restart
  • Run certbot;
sudo certbot --nginx certonly
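With certonly, certbot only issues the certificate and leaves the Nginx config untouched; the files end up under /etc/letsencrypt/live/example.com/. A later HTTPS server block would then reference them and the DH parameters roughly like this (a sketch, not a complete config):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com www.example.com;

    # issued by certbot; renewals keep these paths stable
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # the DH parameters generated earlier
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
}
```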

M2M2: The server

I was going to take my existing clean LEMP server install, based on Ubuntu 14.04 LTS, and update it. But I decided to start with a clean slate and create a new image based on a clean Ubuntu 18.04 LTS, building from scratch. What follows is more or less a direct copy of this article, but I like to run through it myself and note everything down here as well.

So I spin up a new instance based on Ubuntu 18.04 LTS in Google Compute Engine (not going in depth on that here). Then we log on via SSH and run these commands to update everything and install the unzip and MySQL tools;

Set up server and MySQL

sudo apt update && sudo apt upgrade
sudo apt install unzip
sudo apt install mysql-server mysql-client
sudo mysql_secure_installation

At this point we’ll create a database for our test;

sudo mysql
mysql > CREATE DATABASE m2test;
mysql > CREATE USER 'm2test'@'localhost' IDENTIFIED BY 'your-strong-password-for-user';
mysql > GRANT ALL ON m2test.* TO 'm2test'@'localhost';
mysql > exit;

Add system user and PHP

From my previous experiences working with Magento 2.0 just after it launched, I knew that user management on the server side was a pain in the ass. So let’s set it up correctly from the get-go; we create a magento user and add the www-data user to the same group;

sudo useradd -m -U -r -d /opt/magento magento 
sudo usermod -a -G magento www-data
sudo chmod 750 /opt/magento

PHP 7.2 is the new standard in Ubuntu 18.04 LTS, but since we use Nginx we want the PHP-FPM version (too). These commands install everything and then check the service status (which should be active);

sudo apt install php7.2-common php7.2-cli php7.2-fpm php7.2-opcache php7.2-gd php7.2-mysql php7.2-curl php7.2-intl php7.2-xsl php7.2-mbstring php7.2-zip php7.2-bcmath php7.2-soap

sudo systemctl status php7.2-fpm

And next we set the required PHP settings via this handy dandy snippet;

sudo sed -i "s/memory_limit = .*/memory_limit = 1024M/" /etc/php/7.2/fpm/php.ini
sudo sed -i "s/upload_max_filesize = .*/upload_max_filesize = 256M/" /etc/php/7.2/fpm/php.ini
sudo sed -i "s/zlib.output_compression = .*/zlib.output_compression = on/" /etc/php/7.2/fpm/php.ini
sudo sed -i "s/max_execution_time = .*/max_execution_time = 18000/" /etc/php/7.2/fpm/php.ini
sudo sed -i "s/;date.timezone.*/date.timezone = UTC/" /etc/php/7.2/fpm/php.ini
sudo sed -i "s/;opcache.save_comments.*/opcache.save_comments = 1/" /etc/php/7.2/fpm/php.ini
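Those sed one-liners fail silently if a pattern doesn’t match, so it’s worth dry-running them against a throwaway file first. A minimal sketch (the temp file here just mimics the relevant php.ini lines):

```shell
# Mimic the relevant php.ini lines in a throwaway file...
printf '%s\n' "memory_limit = 128M" "max_execution_time = 30" ";date.timezone =" > /tmp/php-ini-test

# ...run the same substitutions as above against it...
sed -i "s/memory_limit = .*/memory_limit = 1024M/" /tmp/php-ini-test
sed -i "s/max_execution_time = .*/max_execution_time = 18000/" /tmp/php-ini-test
sed -i "s/;date.timezone.*/date.timezone = UTC/" /tmp/php-ini-test

# ...and confirm the values took.
grep "memory_limit = 1024M" /tmp/php-ini-test
grep "date.timezone = UTC" /tmp/php-ini-test
```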

Then we create a PHP-FPM pool for the magento user;

sudo nano /etc/php/7.2/fpm/pool.d/magento.conf

[magento]
user = magento
group = www-data
listen.owner = magento
listen.group = www-data
listen = /var/run/php/php7.2-fpm-magento.sock
pm = ondemand
pm.max_children = 50
pm.process_idle_timeout = 10s
pm.max_requests = 500
chdir = /

Then restart the PHP service and check that the socket was created correctly;

sudo service php7.2-fpm restart

ls -al /var/run/php/php7.2-fpm-magento.sock
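Later on, the Nginx vhost for the shop will need to hand PHP requests to this pool. A minimal location block for that (a sketch, assuming the socket path configured above; the full Magento vhost needs more than this) would look something like:

```nginx
location ~ \.php$ {
    # hand PHP off to the magento pool's socket from the pool config above
    fastcgi_pass unix:/var/run/php/php7.2-fpm-magento.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```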

Setup Composer

My brief encounter with Magento 2.0 made me realize the huge potential of tools such as Composer and modman. So I “retrofitted” my Magento 1.9.x installations to sort of fit the same workflow. Hopefully the M2 experience will be way smoother than that! Installing Composer is still smoothest with this one-liner;

curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer

Install Magento

This is where we get freaky. So far everything has been more or less the same as our 1.9 setup of the past years, but now we get into the M2 setup and best practices, and it’s time to apply all I’ve learned. Starting with having a magento user. We switch to it and then use Composer to download the latest release to our (to-be) server root. The commands below will prompt for a username/password; do not be led astray by this; it needs the public/private key combo you can generate in your Magento Marketplace profile.

sudo su - magento
composer create-project --repository-url=https://repo.magento.com/ magento/project-community-edition /opt/magento/public_html

My biggest issue with the development workflow on Magento 1.9.x was the autoload feature of Composer. I’m hoping this is no issue in M2 because it relies on it out of the box, rather than being shoved in later.

So far this is looking good; creating the project via Composer this way automagically gets all PHP dependencies, so it’s far less hassle than pulling in all the separate requirements by hand via SSH. It’s easier, more secure and less error-prone. Now that is what I call progress.

Installing Magento

Once Composer is done downloading everything, we can install Magento with this command. Of course we fill it with our actual details;

php bin/magento setup:install --base-url=https://example.com/ \
--base-url-secure=https://example.com/ \
--admin-firstname="John" \
--admin-lastname="Doe" \
--admin-email="john@example.com" \
--admin-user="john" \
--admin-password="enter-your-strong-password" \
--db-name="m2test" \
--db-host="localhost" \
--db-user="m2test" \
--db-password="your-strong-password-for-user" \
--currency=EUR \
--timezone=America/Chicago \
--use-rewrites=1

And here I ran into the fact that Magento will only install on a FQDN with SSL. So sort that out first, come back here, and run the above command again.

At this point you should see the complete installation progress rolling by, ending in a success message with an admin URL endpoint.

And then..

At this point we need to update our Nginx server configuration with a Magento-enabled one, but this is where I run into the first roadblock, as I am unsure how to properly point to all the Let’s Encrypt business. To be continued…

M2M2: The how-to

We’re “Moving to Magento 2”. The “we” in this case is the company I work for and where I’m responsible for all things tech-related. Which, seeing as this business is an e-commerce business, is a big thing. We have been running about 20 Magento 1.9.x shops for the better part of two years now, but with all of it reaching EOL it is time to start the migration.

Back then, when we decided to put up 1.9.x for all stores, Magento 2’s launch had already happened. But M2’s launch was very, very unstable and full of bugs, so we decided to play it safe and let it mature a bit before moving onto it ourselves. Well, that time is finally here.

I’ll be using this blog to log every step of the way, because it is a big project and I’m bound to forget and overlook things at some point. Written documentation is key. So this first entry is the rough plan for the first test;

  • Set up clean (M1.9) LEMP server
  • Update dependencies to meet M2 server reqs
  • Install and configure M2
  • Set up Migration tool for M1 -> M2
  • Run export/import of data
  • Set up needed modules as much as possible
  • Test and explore!

Once this is done, we should have a decent idea of what functionality is missing out of the box, and what to expect while migrating all product and order data (not looking forward to this..). This will give us a better understanding of everything involved and enable us to put together a decent plan.